From patchwork Wed Feb 12 06:27:03 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13971033
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Kairui Song, Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCH v5 05/18] zram: remove two-staged handle allocation
Date: Wed, 12 Feb 2025 15:27:03 +0900
Message-ID: <20250212063153.179231-6-senozhatsky@chromium.org>
In-Reply-To: <20250212063153.179231-1-senozhatsky@chromium.org>
References: <20250212063153.179231-1-senozhatsky@chromium.org>
MIME-Version: 1.0

Previously, zram write() was atomic, which required us to pass
__GFP_KSWAPD_RECLAIM to the zsmalloc handle allocation on the fast
path and to attempt a slow-path allocation (with recompression) when
the fast path failed.
Since writes are no longer atomic, we can permit direct reclaim
during allocation, remove the fast allocation path, and also drop
the recompression path (which should reduce CPU/battery usage).

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 38 ++++++-----------------------------
 1 file changed, 6 insertions(+), 32 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index e0e64b2610d6..6384c61c03bf 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1724,11 +1724,11 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
 	int ret = 0;
-	unsigned long handle = -ENOMEM;
-	unsigned int comp_len = 0;
+	unsigned long handle;
+	unsigned int comp_len;
 	void *dst, *mem;
 	struct zcomp_strm *zstrm;
-	unsigned long element = 0;
+	unsigned long element;
 	bool same_filled;
 
 	/* First, free memory allocated to this slot (if any) */
@@ -1742,7 +1742,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	if (same_filled)
 		return write_same_filled_page(zram, element, index);
 
-compress_again:
 	zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
 	mem = kmap_local_page(page);
 	ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
@@ -1752,7 +1751,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	if (unlikely(ret)) {
 		zcomp_stream_put(zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
-		zs_free(zram->mem_pool, handle);
 		return ret;
 	}
 
@@ -1761,35 +1759,11 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return write_incompressible_page(zram, page, index);
 	}
 
-	/*
-	 * handle allocation has 2 paths:
-	 * a) fast path is executed with preemption disabled (for
-	 *    per-cpu streams) and has __GFP_DIRECT_RECLAIM bit clear,
-	 *    since we can't sleep;
-	 * b) slow path enables preemption and attempts to allocate
-	 *    the page with __GFP_DIRECT_RECLAIM bit set. we have to
-	 *    put per-cpu compression stream and, thus, to re-do
-	 *    the compression once handle is allocated.
-	 *
-	 * if we have a 'non-null' handle here then we are coming
-	 * from the slow path and handle has already been allocated.
-	 */
-	if (IS_ERR_VALUE(handle))
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				__GFP_KSWAPD_RECLAIM |
-				__GFP_NOWARN |
-				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+	handle = zs_malloc(zram->mem_pool, comp_len,
+			GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle)) {
 		zcomp_stream_put(zstrm);
-		atomic64_inc(&zram->stats.writestall);
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
-		if (IS_ERR_VALUE(handle))
-			return PTR_ERR((void *)handle);
-
-		goto compress_again;
+		return PTR_ERR((void *)handle);
 	}
 
 	if (!zram_can_store_page(zram)) {