From patchwork Mon Jan 29 17:54:16 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13536154
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, "Huang, Ying", Hugh Dickins,
 Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
 David Hildenbrand, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 1/7] mm/swapfile.c: add back some comment
Date: Tue, 30 Jan 2024 01:54:16 +0800
Message-ID: <20240129175423.1987-2-ryncsn@gmail.com>
In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com>
References: <20240129175423.1987-1-ryncsn@gmail.com>
MIME-Version: 1.0

From: Kairui Song

Some useful comments were dropped in commit b56a2d8af914 ("mm: rid
swapoff of quadratic complexity"); add them back.
Signed-off-by: Kairui Song <ryncsn@gmail.com>
---
 mm/swapfile.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 0008cd39af42..606d95b56304 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1881,6 +1881,17 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				folio = page_folio(page);
 		}
 		if (!folio) {
+			/*
+			 * The entry could have been freed, and will not
+			 * be reused since swapoff() already disabled
+			 * allocation from here, or alloc_page() failed.
+			 *
+			 * We don't hold lock here, so the swap entry could be
+			 * SWAP_MAP_BAD (when the cluster is discarding).
+			 * Instead of failing out, we can just skip the swap
+			 * entry because swapoff will wait for discarding
+			 * to finish anyway.
+			 */
 			swp_count = READ_ONCE(si->swap_map[offset]);
 			if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
 				continue;

From patchwork Mon Jan 29 17:54:17 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13536155
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, "Huang, Ying", Hugh Dickins,
 Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
 David Hildenbrand, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 2/7] mm/swap: move no readahead swapin code to a
 stand-alone helper
Date: Tue, 30 Jan 2024 01:54:17 +0800
Message-ID: <20240129175423.1987-3-ryncsn@gmail.com>
In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com>
References: <20240129175423.1987-1-ryncsn@gmail.com>
MIME-Version: 1.0

From: Kairui Song

No functional change; simply move the routine to a stand-alone function
so it can be reused later. The error path handling is copied from the
"out_page" label, to keep the code change minimal for easier reviewing.
Signed-off-by: Kairui Song <ryncsn@gmail.com>
---
 mm/memory.c     | 32 ++++----------------------------
 mm/swap.h       |  8 ++++++++
 mm/swap_state.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+), 28 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..81dc9d467f4e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3803,7 +3803,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swp_entry_t entry;
 	pte_t pte;
 	vm_fault_t ret = 0;
-	void *shadow = NULL;
 
 	if (!pte_unmap_same(vmf))
 		goto out;
@@ -3867,33 +3866,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
-			/* skip swapcache */
-			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-						vma, vmf->address, false);
-			page = &folio->page;
-			if (folio) {
-				__folio_set_locked(folio);
-				__folio_set_swapbacked(folio);
-
-				if (mem_cgroup_swapin_charge_folio(folio,
-							vma->vm_mm, GFP_KERNEL,
-							entry)) {
-					ret = VM_FAULT_OOM;
-					goto out_page;
-				}
-				mem_cgroup_swapin_uncharge_swap(entry);
-
-				shadow = get_shadow_from_swap_cache(entry);
-				if (shadow)
-					workingset_refault(folio, shadow);
-
-				folio_add_lru(folio);
-
-				/* To provide entry to swap_read_folio() */
-				folio->swap = entry;
-				swap_read_folio(folio, true, NULL);
-				folio->private = NULL;
-			}
+			/* skip swapcache and readahead */
+			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
+			if (folio)
+				page = &folio->page;
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
diff --git a/mm/swap.h b/mm/swap.h
index 758c46ca671e..83eab7b67e77 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -56,6 +56,8 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			      struct vm_fault *vmf);
+struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
+			    struct vm_fault *vmf);
 
 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -86,6 +88,12 @@ static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
 	return NULL;
 }
 
+struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
+			    struct vm_fault *vmf)
+{
+	return NULL;
+}
+
 static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 			struct vm_fault *vmf)
 {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index e671266ad772..645f5bcad123 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -861,6 +861,53 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	return folio;
 }
 
+/**
+ * swapin_direct - swap in a folio skipping swap cache and readahead
+ * @entry: swap entry of this memory
+ * @gfp_mask: memory allocation flags
+ * @vmf: fault information
+ *
+ * Returns the struct folio for entry and addr after the swap entry is read
+ * in.
+ */
+struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
+			    struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
+	void *shadow = NULL;
+
+	/* skip swapcache */
+	folio = vma_alloc_folio(gfp_mask, 0,
+				vma, vmf->address, false);
+	if (folio) {
+		__folio_set_locked(folio);
+		__folio_set_swapbacked(folio);
+
+		if (mem_cgroup_swapin_charge_folio(folio,
+					vma->vm_mm, GFP_KERNEL,
+					entry)) {
+			folio_unlock(folio);
+			folio_put(folio);
+			return NULL;
+		}
+		mem_cgroup_swapin_uncharge_swap(entry);
+
+		shadow = get_shadow_from_swap_cache(entry);
+		if (shadow)
+			workingset_refault(folio, shadow);
+
+		folio_add_lru(folio);
+
+		/* To provide entry to swap_read_folio() */
+		folio->swap = entry;
+		swap_read_folio(folio, true, NULL);
+		folio->private = NULL;
+	}
+
+	return folio;
+}
+
 /**
  * swapin_readahead - swap in pages in hope we need them soon
  * @entry: swap entry of this memory

From patchwork Mon Jan 29 17:54:18 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13536156
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, "Huang, Ying", Hugh Dickins,
 Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
 David Hildenbrand, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 3/7] mm/swap: always account swapped in page into
 current memcg
Date: Tue, 30 Jan 2024 01:54:18 +0800
Message-ID: <20240129175423.1987-4-ryncsn@gmail.com>
In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com>
References: <20240129175423.1987-1-ryncsn@gmail.com>
MIME-Version: 1.0

From: Kairui Song

Currently, mem_cgroup_swapin_charge_folio is always called with
mm == NULL, except in swapin_direct.
swapin_direct is only used when swapin should skip readahead and the
swapcache (SWP_SYNCHRONOUS_IO). All other callers of
mem_cgroup_swapin_charge_folio are for swapin that should not skip
readahead and the cache. This could cause swapin charging to behave
differently depending on the swap device, which is unexpected.

This is currently not happening because the only caller of
swapin_direct is the direct anon page fault path, where mm always
equals current->mm, but that will no longer hold once swapin_direct
gains other callers (e.g. swapoff) that share the readahead-skipping
logic.

So make swapin_direct also pass NULL for mm, so swapin charging behaves
consistently and is not affected by the type of swap device or the
readahead policy. After this, the second parameter of
mem_cgroup_swapin_charge_folio is no longer used, so it can be safely
dropped.

Signed-off-by: Kairui Song <ryncsn@gmail.com>
---
 include/linux/memcontrol.h | 4 ++--
 mm/memcontrol.c            | 5 ++---
 mm/swap_state.c            | 7 +++----
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 20ff87f8e001..540590d80958 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -693,7 +693,7 @@ static inline int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm,
 int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,
 		long nr_pages);
 
-int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
+int mem_cgroup_swapin_charge_folio(struct folio *folio,
 		gfp_t gfp, swp_entry_t entry);
 
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
@@ -1281,7 +1281,7 @@ static inline int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg,
 }
 
 static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
-			struct mm_struct *mm, gfp_t gfp, swp_entry_t entry)
+			gfp_t gfp, swp_entry_t entry)
 {
 	return 0;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e4c8735e7c85..5852742df958 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7306,8 +7306,7 @@ int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
-		gfp_t gfp, swp_entry_t entry)
+int mem_cgroup_swapin_charge_folio(struct folio *folio, gfp_t gfp, swp_entry_t entry)
 {
 	struct mem_cgroup *memcg;
 	unsigned short id;
@@ -7320,7 +7319,7 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
 	if (!memcg || !css_tryget_online(&memcg->css))
-		memcg = get_mem_cgroup_from_mm(mm);
+		memcg = get_mem_cgroup_from_current();
 	rcu_read_unlock();
 
 	ret = charge_memcg(folio, memcg, gfp);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 645f5bcad123..a450d09fc0db 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -495,7 +495,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	__folio_set_locked(folio);
 	__folio_set_swapbacked(folio);
 
-	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
+	if (mem_cgroup_swapin_charge_folio(folio, gfp_mask, entry))
 		goto fail_unlock;
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
@@ -884,9 +884,8 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 	__folio_set_locked(folio);
 	__folio_set_swapbacked(folio);
 
-	if (mem_cgroup_swapin_charge_folio(folio,
-				vma->vm_mm, GFP_KERNEL,
-				entry)) {
+	if (mem_cgroup_swapin_charge_folio(folio, GFP_KERNEL,
+					   entry)) {
 		folio_unlock(folio);
 		folio_put(folio);
 		return NULL;

From patchwork Mon Jan 29 17:54:19 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13536157
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, "Huang, Ying", Hugh Dickins, Johannes Weiner,
    Matthew Wilcox, Michal Hocko, Yosry Ahmed, David Hildenbrand,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 4/7] mm/swap: introduce swapin_entry for unified readahead policy
Date: Tue, 30 Jan 2024 01:54:19 +0800
Message-ID: <20240129175423.1987-5-ryncsn@gmail.com>
In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com>
References: <20240129175423.1987-1-ryncsn@gmail.com>

From: Kairui Song

Introduce swapin_entry, which merges swapin_readahead and swapin_direct,
making it the main entry point for swapping in pages, with a unified
swapin readahead policy.

This commit also makes swapoff use the new helper and skip readahead for
SYNCHRONOUS_IO devices, since readahead is not helpful there. Swapping
off a 10G ZRAM (lzo-rle) after the same workload is now faster because
readahead is skipped and overhead is reduced.

Before:
time swapoff /dev/zram0
real    0m12.337s
user    0m0.001s
sys     0m12.329s

After:
time swapoff /dev/zram0
real    0m9.728s
user    0m0.001s
sys     0m9.719s

Signed-off-by: Kairui Song
Reviewed-by: "Huang, Ying"
---
 mm/memory.c     | 18 +++---------------
 mm/swap.h       | 16 ++++------------
 mm/swap_state.c | 40 ++++++++++++++++++++++++----------------
 mm/swapfile.c   |  7 ++-----
 4 files changed, 33 insertions(+), 48 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 81dc9d467f4e..8711f8a07039 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3864,20 +3864,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swapcache = folio;

 	if (!folio) {
-		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
-		    __swap_count(entry) == 1) {
-			/* skip swapcache and readahead */
-			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
-			if (folio)
-				page = &folio->page;
-		} else {
-			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-						vmf);
-			if (page)
-				folio = page_folio(page);
-			swapcache = folio;
-		}
-
+		folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
+				     vmf, &swapcache);
 		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
@@ -3890,11 +3878,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			ret = VM_FAULT_OOM;
 			goto unlock;
 		}
-
 		/* Had to read the page from swap area: Major fault */
 		ret = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
+		page = folio_file_page(folio, swp_offset(entry));
 	} else if (PageHWPoison(page)) {
 		/*
 		 * hwpoisoned dirty swapcache pages are kept for killing
diff --git a/mm/swap.h b/mm/swap.h
index 83eab7b67e77..8f8185d3865c 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -54,10 +54,8 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
 		bool skip_if_exists);
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
-struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
-			      struct vm_fault *vmf);
-struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
-			    struct vm_fault *vmf);
+struct folio *swapin_entry(swp_entry_t entry, gfp_t flag,
+			   struct vm_fault *vmf, struct folio **swapcached);

 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -88,14 +86,8 @@ static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
 	return NULL;
 }

-struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
-			    struct vm_fault *vmf)
-{
-	return NULL;
-}
-
-static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
-			struct vm_fault *vmf)
+static inline struct folio *swapin_entry(swp_entry_t swp, gfp_t gfp_mask,
+			struct vm_fault *vmf, struct folio **swapcached)
 {
 	return NULL;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a450d09fc0db..5e06b2e140d4 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -870,8 +870,8 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
  * Returns the struct folio for entry and addr after the swap entry is read
  * in.
  */
-struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
-			    struct vm_fault *vmf)
+static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
+				   struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio;
@@ -908,33 +908,41 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 }

 /**
- * swapin_readahead - swap in pages in hope we need them soon
+ * swapin_entry - swap in a folio from swap entry
  * @entry: swap entry of this memory
  * @gfp_mask: memory allocation flags
  * @vmf: fault information
+ * @swapcache: set to the swapcache folio if swapcache is used
  *
  * Returns the struct page for entry and addr, after queueing swapin.
  *
- * It's a main entry function for swap readahead. By the configuration,
+ * It's the main entry function for swap in. By the configuration,
  * it will read ahead blocks by cluster-based(ie, physical disk based)
- * or vma-based(ie, virtual address based on faulty address) readahead.
+ * or vma-based(ie, virtual address based on faulty address) readahead,
+ * or skip the readahead(ie, ramdisk based swap device).
  */
-struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
-			      struct vm_fault *vmf)
+struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
+			   struct vm_fault *vmf, struct folio **swapcache)
 {
 	struct mempolicy *mpol;
-	pgoff_t ilx;
 	struct folio *folio;
+	pgoff_t ilx;

-	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-	folio = swap_use_vma_readahead() ?
-		swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
-		swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
-	mpol_cond_put(mpol);
+	if (data_race(swp_swap_info(entry)->flags & SWP_SYNCHRONOUS_IO) &&
+	    __swap_count(entry) == 1) {
+		folio = swapin_direct(entry, gfp_mask, vmf);
+	} else {
+		mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
+		if (swap_use_vma_readahead())
+			folio = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
+		else
+			folio = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+		mpol_cond_put(mpol);
+		if (swapcache)
+			*swapcache = folio;
+	}

-	if (!folio)
-		return NULL;
-	return folio_file_page(folio, swp_offset(entry));
+	return folio;
 }

 #ifdef CONFIG_SYSFS
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 606d95b56304..1cf7e72e19e3 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1867,7 +1867,6 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,

 	folio = swap_cache_get_folio(entry, vma, addr);
 	if (!folio) {
-		struct page *page;
 		struct vm_fault vmf = {
 			.vma = vma,
 			.address = addr,
@@ -1875,10 +1874,8 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			.pmd = pmd,
 		};

-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-					&vmf);
-		if (page)
-			folio = page_folio(page);
+		folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
+				     &vmf, NULL);
 	}
 	if (!folio) {
 		/*

From patchwork Mon Jan 29 17:54:20 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13536158
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, "Huang, Ying", Hugh Dickins, Johannes Weiner,
    Matthew Wilcox, Michal Hocko, Yosry Ahmed, David Hildenbrand,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 5/7] mm/swap: avoid a duplicated swap cache lookup for SWP_SYNCHRONOUS_IO
Date: Tue, 30 Jan 2024 01:54:20 +0800
Message-ID: <20240129175423.1987-6-ryncsn@gmail.com>
In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com>
References: <20240129175423.1987-1-ryncsn@gmail.com>

From: Kairui Song

When an xa_value is returned by the swap cache lookup, keep it to be
used later for the workingset refault check, instead of doing the
lookup again in swapin_no_readahead.

Shadow lookup and the workingset check are skipped for swapoff to
reduce overhead: workingset checking for anon pages upon swapoff is not
helpful, and simply considering all pages inactive makes more sense,
since swapoff doesn't mean the pages are being accessed.

After this commit, swapin is about 4% faster for ZRAM. Micro benchmark
result (using madvise to swap out 10G of zero-filled data to ZRAM, then
reading it back in):

Before: 11143285 us
After:  10692644 us (+4.1%)

Signed-off-by: Kairui Song
Reviewed-by: "Huang, Ying"
---
 mm/memory.c     |  5 +++--
 mm/shmem.c      |  2 +-
 mm/swap.h       | 11 ++++++-----
 mm/swap_state.c | 23 +++++++++++++----------
 mm/swapfile.c   |  4 ++--
 5 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8711f8a07039..349946899f8d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3800,6 +3800,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool exclusive = false;
+	void *shadow = NULL;
 	swp_entry_t entry;
 	pte_t pte;
 	vm_fault_t ret = 0;
@@ -3858,14 +3859,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(!si))
 		goto out;

-	folio = swap_cache_get_folio(entry, vma, vmf->address);
+	folio = swap_cache_get_folio(entry, vma, vmf->address, &shadow);
 	if (folio)
 		page = folio_file_page(folio, swp_offset(entry));
 	swapcache = folio;

 	if (!folio) {
 		folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
-				     vmf, &swapcache);
+				     vmf, &swapcache, shadow);
 		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
diff --git a/mm/shmem.c b/mm/shmem.c
index d7c84ff62186..698a31bf7baa 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1873,7 +1873,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	}

 	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, 0);
+	folio = swap_cache_get_folio(swap, NULL, 0, NULL);
 	if (!folio) {
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
diff --git a/mm/swap.h b/mm/swap.h
index 8f8185d3865c..ca9cb472a263 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -42,7 +42,8 @@ void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr);
+		struct vm_area_struct *vma, unsigned long addr,
+		void **shadowp);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
 		pgoff_t index);
@@ -54,8 +55,8 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
 		bool skip_if_exists);
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
-struct folio *swapin_entry(swp_entry_t entry, gfp_t flag,
-			   struct vm_fault *vmf, struct folio **swapcached);
+struct folio *swapin_entry(swp_entry_t entry, gfp_t flag, struct vm_fault *vmf,
+			   struct folio **swapcached, void *shadow);

 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -87,7 +88,7 @@ static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
 }

 static inline struct folio *swapin_entry(swp_entry_t swp, gfp_t gfp_mask,
-			struct vm_fault *vmf, struct folio **swapcached)
+			struct vm_fault *vmf, struct folio **swapcached, void *shadow)
 {
 	return NULL;
 }
@@ -98,7 +99,7 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 }

 static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr)
+		struct vm_area_struct *vma, unsigned long addr, void **shadowp)
 {
 	return NULL;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 5e06b2e140d4..e41a137a6123 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -330,12 +330,18 @@ static inline bool swap_use_vma_readahead(void)
  * Caller must lock the swap device or hold a reference to keep it valid.
  */
 struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr)
+		struct vm_area_struct *vma, unsigned long addr, void **shadowp)
 {
 	struct folio *folio;

-	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
-	if (!IS_ERR(folio)) {
+	folio = filemap_get_entry(swap_address_space(entry), swp_offset(entry));
+	if (xa_is_value(folio)) {
+		if (shadowp)
+			*shadowp = folio;
+		return NULL;
+	}
+
+	if (folio) {
 		bool vma_ra = swap_use_vma_readahead();
 		bool readahead;
@@ -365,8 +371,6 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 		if (!vma || !vma_ra)
 			atomic_inc(&swapin_readahead_hits);
 	}
-	} else {
-		folio = NULL;
 	}

 	return folio;
@@ -866,16 +870,16 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
  * @entry: swap entry of this memory
  * @gfp_mask: memory allocation flags
  * @vmf: fault information
+ * @shadow: workingset shadow corresponding to entry
  *
  * Returns the struct folio for entry and addr after the swap entry is read
  * in.
  */
 static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
-			struct vm_fault *vmf)
+			struct vm_fault *vmf, void *shadow)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio;
-	void *shadow = NULL;

 	/* skip swapcache */
 	folio = vma_alloc_folio(gfp_mask, 0,
@@ -892,7 +896,6 @@ static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 	}

 	mem_cgroup_swapin_uncharge_swap(entry);
-	shadow = get_shadow_from_swap_cache(entry);
 	if (shadow)
 		workingset_refault(folio, shadow);
@@ -922,7 +925,7 @@ static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
  * or skip the readahead(ie, ramdisk based swap device).
  */
 struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
-			struct vm_fault *vmf, struct folio **swapcache)
+			struct vm_fault *vmf, struct folio **swapcache, void *shadow)
 {
 	struct mempolicy *mpol;
 	struct folio *folio;
@@ -930,7 +933,7 @@ struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,

 	if (data_race(swp_swap_info(entry)->flags & SWP_SYNCHRONOUS_IO) &&
 	    __swap_count(entry) == 1) {
-		folio = swapin_direct(entry, gfp_mask, vmf);
+		folio = swapin_direct(entry, gfp_mask, vmf, shadow);
 	} else {
 		mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 		if (swap_use_vma_readahead())
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1cf7e72e19e3..aac26f5a6cec 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1865,7 +1865,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	pte_unmap(pte);
 	pte = NULL;

-	folio = swap_cache_get_folio(entry, vma, addr);
+	folio = swap_cache_get_folio(entry, vma, addr, NULL);
 	if (!folio) {
 		struct vm_fault vmf = {
 			.vma = vma,
@@ -1875,7 +1875,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		};

 		folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
-				     &vmf, NULL);
+				     &vmf, NULL, NULL);
 	}
 	if (!folio) {
 		/*

From patchwork Mon Jan 29 17:54:21 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13536159
bq10-20020a05680823ca00b003be453de061mr4478694oib.6.1706550924927; Mon, 29 Jan 2024 09:55:24 -0800 (PST) Received: from KASONG-MB2.tencent.com ([1.203.117.98]) by smtp.gmail.com with ESMTPSA id h8-20020aa79f48000000b006ddcadb1e2csm6116676pfr.29.2024.01.29.09.55.21 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Mon, 29 Jan 2024 09:55:24 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , Chris Li , "Huang, Ying" , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , Yosry Ahmed , David Hildenbrand , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH v3 6/7] mm/swap, shmem: use unified swapin helper for shmem Date: Tue, 30 Jan 2024 01:54:21 +0800 Message-ID: <20240129175423.1987-7-ryncsn@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com> References: <20240129175423.1987-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 6A67912001B X-Stat-Signature: ayjp1486cu9u5xbr1eg3ghwwgrntf14w X-HE-Tag: 1706550926-179084 X-HE-Meta: 
From: Kairui Song

Currently, shmem uses cluster readahead for all swap backends.
Cluster readahead is not a good fit for ramdisk-based devices (e.g. ZRAM), so it is better to skip it there. After switching to the new helper, most benchmarks showed good results:

- Single-file sequential read on ramdisk:
  perf stat --repeat 20 dd if=/tmpfs/test of=/dev/null bs=1M count=8192
  (/tmpfs/test is a zero-filled file, using brd as swap, 4G memcg limit)

  Before: 22.248 +- 0.549
  After:  22.021 +- 0.684 (-1.1%)

- shmem FIO test 1 on a Ryzen 5900HX:
  fio -name=tmpfs --numjobs=16 --directory=/tmpfs --size=960m \
    --ioengine=mmap --rw=randread --random_distribution=zipf:0.5 \
    --time_based --ramp_time=1m --runtime=5m --group_reporting
  (using brd as swap, 2G memcg limit)

  Before:
  bw (  MiB/s): min= 1167, max= 1732, per=100.00%, avg=1460.82, stdev= 4.38, samples=9536
  iops        : min=298938, max=443557, avg=373964.41, stdev=1121.27, samples=9536
  After (+3.5%):
  bw (  MiB/s): min= 1285, max= 1738, per=100.00%, avg=1512.88, stdev= 4.34, samples=9456
  iops        : min=328957, max=445105, avg=387294.21, stdev=1111.15, samples=9456

- shmem FIO test 2 on a Ryzen 5900HX:
  fio -name=tmpfs --numjobs=16 --directory=/tmpfs --size=960m \
    --ioengine=mmap --rw=randread --random_distribution=zipf:1.2 \
    --time_based --ramp_time=1m --runtime=5m --group_reporting
  (using brd as swap, 2G memcg limit)

  Before:
  bw (  MiB/s): min= 5296, max= 7112, per=100.00%, avg=6131.93, stdev=17.09, samples=9536
  iops        : min=1355934, max=1820833, avg=1569769.11, stdev=4375.93, samples=9536
  After (+3.1%):
  bw (  MiB/s): min= 5466, max= 7173, per=100.00%, avg=6324.51, stdev=16.66, samples=9521
  iops        : min=1399355, max=1836435, avg=1619068.90, stdev=4263.94, samples=9521

So cluster readahead does not help much even for sequential reads, and for the random stress tests, performance is better without it. Considering that both memory and swap devices slowly become more fragmented over time, and that the commonly used ZRAM costs much more CPU than a plain ramdisk, false readahead would occur more frequently and waste more CPU.
Direct swapin (skipping the swap cache) is cheaper, so use the new helper and skip readahead for SWP_SYNCHRONOUS_IO devices.

Signed-off-by: Kairui Song
---
 mm/memory.c     |  2 +-
 mm/shmem.c      | 50 ++++++++++++++++++++++++++++++++----------------
 mm/swap.h       | 14 ++++----------
 mm/swap_state.c | 52 +++++++++++++++++++++++++++++++++----------------
 mm/swapfile.c   |  2 +-
 5 files changed, 74 insertions(+), 46 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 349946899f8d..51962126a79c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3866,7 +3866,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
-				     vmf, &swapcache, shadow);
+				     vmf, NULL, 0, &swapcache, shadow);
 		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
diff --git a/mm/shmem.c b/mm/shmem.c
index 698a31bf7baa..d3722e25cb32 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1565,15 +1565,16 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
 static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
 			pgoff_t index, unsigned int order, pgoff_t *ilx);
 
-static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
+static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp,
+			struct shmem_inode_info *info, pgoff_t index,
+			struct folio **swapcache, void *shadow)
 {
 	struct mempolicy *mpol;
 	pgoff_t ilx;
 	struct folio *folio;
 
 	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
-	folio = swap_cluster_readahead(swap, gfp, mpol, ilx);
+	folio = swapin_entry(swap, gfp, NULL, mpol, ilx, swapcache, shadow);
 	mpol_cond_put(mpol);
 
 	return folio;
@@ -1852,8 +1853,9 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
+	struct folio *swapcache = NULL, *folio;
 	struct swap_info_struct *si;
-	struct folio *folio = NULL;
+	void *shadow = NULL;
 	swp_entry_t swap;
 	int error;
 
@@ -1873,8 +1875,10 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	}
 
 	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, 0, NULL);
-	if (!folio) {
+	folio = swap_cache_get_folio(swap, NULL, 0, &shadow);
+	if (folio) {
+		swapcache = folio;
+	} else {
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
@@ -1882,7 +1886,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
 		/* Here we actually start the io */
-		folio = shmem_swapin_cluster(swap, gfp, info, index);
+		folio = shmem_swapin(swap, gfp, info, index, &swapcache, shadow);
 		if (!folio) {
 			error = -ENOMEM;
 			goto failed;
@@ -1891,17 +1895,21 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
-	if (!folio_test_swapcache(folio) ||
-	    folio->swap.val != swap.val ||
-	    !shmem_confirm_swap(mapping, index, swap)) {
+	if (swapcache) {
+		if (!folio_test_swapcache(folio) || folio->swap.val != swap.val) {
+			error = -EEXIST;
+			goto unlock;
+		}
+		if (!folio_test_uptodate(folio)) {
+			error = -EIO;
+			goto failed;
+		}
+		folio_wait_writeback(folio);
+	}
+	if (!shmem_confirm_swap(mapping, index, swap)) {
 		error = -EEXIST;
 		goto unlock;
 	}
-	if (!folio_test_uptodate(folio)) {
-		error = -EIO;
-		goto failed;
-	}
-	folio_wait_writeback(folio);
 
 	/*
 	 * Some architectures may have to restore extra metadata to the
@@ -1909,12 +1917,19 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	 */
 	arch_swap_restore(swap, folio);
 
-	if (shmem_should_replace_folio(folio, gfp)) {
+	/* If swapcache is bypassed, the folio is newly allocated and already respects the gfp flags */
+	if (swapcache && shmem_should_replace_folio(folio, gfp)) {
 		error = shmem_replace_folio(&folio, gfp, info, index);
 		if (error)
 			goto failed;
 	}
 
+	/*
+	 * The expected value checking below should be enough to ensure
+	 * only one up-to-date swapin success. swap_free() is called after
+	 * this, so the entry can't be reused. As long as the mapping still
+	 * has the old entry value, it's never swapped in or modified.
+	 */
 	error = shmem_add_to_page_cache(folio, mapping, index,
 					swp_to_radix_entry(swap), gfp);
 	if (error)
@@ -1925,7 +1940,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (sgp == SGP_WRITE)
 		folio_mark_accessed(folio);
 
-	delete_from_swap_cache(folio);
+	if (swapcache)
+		delete_from_swap_cache(folio);
 	folio_mark_dirty(folio);
 	swap_free(swap);
 	put_swap_device(si);
diff --git a/mm/swap.h b/mm/swap.h
index ca9cb472a263..597a56c7fb02 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -53,10 +53,9 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
 		struct mempolicy *mpol, pgoff_t ilx,
 		bool *new_page_allocated, bool skip_if_exists);
-struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
-		struct mempolicy *mpol, pgoff_t ilx);
 struct folio *swapin_entry(swp_entry_t entry, gfp_t flag, struct vm_fault *vmf,
-		struct folio **swapcached, void *shadow);
+		struct mempolicy *mpol, pgoff_t ilx,
+		struct folio **swapcache, void *shadow);
 
 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -81,14 +80,9 @@ static inline void show_swap_cache_info(void)
 {
 }
 
-static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
-		gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
-{
-	return NULL;
-}
-
 static inline struct folio *swapin_entry(swp_entry_t swp, gfp_t gfp_mask,
-		struct vm_fault *vmf, struct folio **swapcached, void *shadow)
+		struct vm_fault *vmf, struct mempolicy *mpol, pgoff_t ilx,
+		struct folio **swapcache, void *shadow)
 {
 	return NULL;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index e41a137a6123..20c206149be4 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -316,6 +316,18 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
 	release_pages(pages, nr);
 }
 
+static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_t entry)
+{
+	int count;
+
+	if (!data_race(si->flags & SWP_SYNCHRONOUS_IO))
+		return false;
+
+	count = __swap_count(entry);
+
+	return (count == 1 || count == SWAP_MAP_SHMEM);
+}
+
 static inline bool swap_use_vma_readahead(void)
 {
 	return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
@@ -635,8 +647,8 @@ static unsigned long swapin_nr_pages(unsigned long offset)
  * are used for every page of the readahead: neighbouring pages on swap
  * are fairly likely to have been swapped out from the same node.
  */
-struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
-		struct mempolicy *mpol, pgoff_t ilx)
+static struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
+		struct mempolicy *mpol, pgoff_t ilx)
 {
 	struct folio *folio;
 	unsigned long entry_offset = swp_offset(entry);
@@ -876,14 +888,13 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
  * in.
  */
 static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_fault *vmf, void *shadow)
+		struct mempolicy *mpol, pgoff_t ilx,
+		void *shadow)
 {
-	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio;
 
-	/* skip swapcache */
-	folio = vma_alloc_folio(gfp_mask, 0,
-				vma, vmf->address, false);
+	folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0,
+			mpol, ilx, numa_node_id());
 	if (folio) {
 		__folio_set_locked(folio);
 		__folio_set_swapbacked(folio);
@@ -916,6 +927,10 @@ static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
  * @gfp_mask: memory allocation flags
  * @vmf: fault information
  * @swapcache: set to the swapcache folio if swapcache is used
+ * @mpol: NUMA memory alloc policy to be applied,
+ *        not needed if vmf is not NULL
+ * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE,
+ *       not needed if vmf is not NULL
  *
  * Returns the struct page for entry and addr, after queueing swapin.
  *
@@ -924,26 +939,29 @@ static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
  * or vma-based(ie, virtual address based on faulty address) readahead,
  * or skip the readahead(ie, ramdisk based swap device).
  */
-struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_fault *vmf, struct folio **swapcache, void *shadow)
+struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask, struct vm_fault *vmf,
+		struct mempolicy *mpol, pgoff_t ilx,
+		struct folio **swapcache, void *shadow)
 {
-	struct mempolicy *mpol;
+	bool mpol_put = false;
 	struct folio *folio;
-	pgoff_t ilx;
 
-	if (data_race(swp_swap_info(entry)->flags & SWP_SYNCHRONOUS_IO) &&
-	    __swap_count(entry) == 1) {
-		folio = swapin_direct(entry, gfp_mask, vmf, shadow);
-	} else {
+	if (!mpol) {
 		mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-		if (swap_use_vma_readahead())
+		mpol_put = true;
+	}
+	if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
+		folio = swapin_direct(entry, gfp_mask, mpol, ilx, shadow);
+	} else {
+		if (vmf && swap_use_vma_readahead())
 			folio = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
 		else
 			folio = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
-		mpol_cond_put(mpol);
 		if (swapcache)
 			*swapcache = folio;
 	}
+	if (mpol_put)
+		mpol_cond_put(mpol);
 
 	return folio;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index aac26f5a6cec..7ff05aaf6925 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1875,7 +1875,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		};
 
 		folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
-				     &vmf, NULL, NULL);
+				     &vmf, NULL, 0, NULL, NULL);
 	}
 	if (!folio) {
 		/*

From patchwork Mon Jan 29 17:54:22 2024
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, "Huang, Ying", Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed, David Hildenbrand, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 7/7] mm/swap: refactor swap_cache_get_folio
Date: Tue, 30 Jan 2024 01:54:22 +0800
Message-ID: <20240129175423.1987-8-ryncsn@gmail.com>
In-Reply-To: <20240129175423.1987-1-ryncsn@gmail.com>
References: <20240129175423.1987-1-ryncsn@gmail.com>
From: Kairui Song

No functional change; rearrange the code layout to reduce object size and remove a redundant indentation level.
With gcc 13.2.1:

./scripts/bloat-o-meter mm/swap_state.o.old mm/swap_state.o
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-35 (-35)
Function                                     old     new   delta
swap_cache_get_folio                         380     345     -35
Total: Before=8785, After=8750, chg -0.40%

Signed-off-by: Kairui Song
---
 mm/swap_state.c | 59 ++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 30 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 20c206149be4..2f809b69b65a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -341,9 +341,10 @@ static inline bool swap_use_vma_readahead(void)
  *
  * Caller must lock the swap device or hold a reference to keep it valid.
  */
-struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr, void **shadowp)
+struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_area_struct *vma,
+		unsigned long addr, void **shadowp)
 {
+	bool vma_ra, readahead;
 	struct folio *folio;
 
 	folio = filemap_get_entry(swap_address_space(entry), swp_offset(entry));
@@ -352,37 +353,35 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 			*shadowp = folio;
 		return NULL;
 	}
+	if (!folio)
+		return NULL;
 
-	if (folio) {
-		bool vma_ra = swap_use_vma_readahead();
-		bool readahead;
+	/*
+	 * At the moment, we don't support PG_readahead for anon THP
+	 * so let's bail out rather than confusing the readahead stat.
+	 */
+	if (unlikely(folio_test_large(folio)))
+		return folio;
 
-		/*
-		 * At the moment, we don't support PG_readahead for anon THP
-		 * so let's bail out rather than confusing the readahead stat.
-		 */
-		if (unlikely(folio_test_large(folio)))
-			return folio;
-
-		readahead = folio_test_clear_readahead(folio);
-		if (vma && vma_ra) {
-			unsigned long ra_val;
-			int win, hits;
-
-			ra_val = GET_SWAP_RA_VAL(vma);
-			win = SWAP_RA_WIN(ra_val);
-			hits = SWAP_RA_HITS(ra_val);
-			if (readahead)
-				hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
-			atomic_long_set(&vma->swap_readahead_info,
-					SWAP_RA_VAL(addr, win, hits));
-		}
+	vma_ra = swap_use_vma_readahead();
+	readahead = folio_test_clear_readahead(folio);
+	if (vma && vma_ra) {
+		unsigned long ra_val;
+		int win, hits;
+
+		ra_val = GET_SWAP_RA_VAL(vma);
+		win = SWAP_RA_WIN(ra_val);
+		hits = SWAP_RA_HITS(ra_val);
+		if (readahead)
+			hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
+		atomic_long_set(&vma->swap_readahead_info,
+				SWAP_RA_VAL(addr, win, hits));
+	}
 
-		if (readahead) {
-			count_vm_event(SWAP_RA_HIT);
-			if (!vma || !vma_ra)
-				atomic_inc(&swapin_readahead_hits);
-		}
+	if (readahead) {
+		count_vm_event(SWAP_RA_HIT);
+		if (!vma || !vma_ra)
+			atomic_inc(&swapin_readahead_hits);
 	}
 
 	return folio;