From patchwork Mon Aug 21 18:33:33 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13359737
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/8] mm: page_alloc: use get_pfnblock_migratetype where pfn available
Date: Mon, 21 Aug 2023 14:33:33 -0400
Message-ID: <20230821183733.106619-2-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
References: <20230821183733.106619-1-hannes@cmpxchg.org>
Save a pfn_to_page() lookup when the pfn is right there already.

Signed-off-by: Johannes Weiner
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 977bb4d5e8e1..e430ac45df7c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -824,7 +824,7 @@ static inline void __free_one_page(struct page *page,
			 * pageblock isolation could cause incorrect freepage or CMA
			 * accounting or HIGHATOMIC accounting.
			 */
-			int buddy_mt = get_pageblock_migratetype(buddy);
+			int buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);

			if (migratetype != buddy_mt &&
			    (!migratetype_is_mergeable(migratetype) ||
@@ -900,7 +900,7 @@ int split_free_page(struct page *free_page,
		goto out;
	}

-	mt = get_pageblock_migratetype(free_page);
+	mt = get_pfnblock_migratetype(free_page, free_page_pfn);
	if (likely(!is_migrate_isolate(mt)))
		__mod_zone_freepage_state(zone, -(1UL << order), mt);
From patchwork Mon Aug 21 18:33:34 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13359738
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/8] mm: page_alloc: remove pcppage migratetype caching
Date: Mon, 21 Aug 2023 14:33:34 -0400
Message-ID: <20230821183733.106619-3-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
References: <20230821183733.106619-1-hannes@cmpxchg.org>
The idea behind the cache is to save get_pageblock_migratetype() lookups during bulk freeing. A microbenchmark suggests this isn't helping, though.

The pcp migratetype can get stale, which means that bulk freeing has an extra branch to check if the pageblock was isolated while on the pcp. While the variance overlaps, the cache write and the branch seem to make this a net negative.
The following test allocates and frees batches of 10,000 pages (~3x the pcp high marks to trigger flushing):

Before:
          8,668.48 msec task-clock              #   99.735 CPUs utilized            ( +-  2.90% )
                19      context-switches        #    4.341 /sec                     ( +-  3.24% )
                 0      cpu-migrations          #    0.000 /sec
            17,440      page-faults             #    3.984 K/sec                    ( +-  2.90% )
    41,758,692,473      cycles                  #    9.541 GHz                      ( +-  2.90% )
   126,201,294,231      instructions            #    5.98  insn per cycle           ( +-  2.90% )
    25,348,098,335      branches                #    5.791 G/sec                    ( +-  2.90% )
        33,436,921      branch-misses           #    0.26% of all branches          ( +-  2.90% )

         0.0869148 +- 0.0000302 seconds time elapsed  ( +-  0.03% )

After:
          8,444.81 msec task-clock              #   99.726 CPUs utilized            ( +-  2.90% )
                22      context-switches        #    5.160 /sec                     ( +-  3.23% )
                 0      cpu-migrations          #    0.000 /sec
            17,443      page-faults             #    4.091 K/sec                    ( +-  2.90% )
    40,616,738,355      cycles                  #    9.527 GHz                      ( +-  2.90% )
   126,383,351,792      instructions            #    6.16  insn per cycle           ( +-  2.90% )
    25,224,985,153      branches                #    5.917 G/sec                    ( +-  2.90% )
        32,236,793      branch-misses           #    0.25% of all branches          ( +-  2.90% )

         0.0846799 +- 0.0000412 seconds time elapsed  ( +-  0.05% )

A side effect is that this also ensures that pages whose pageblock gets stolen while on the pcplist end up on the right freelist, and that we don't perform potentially type-incompatible buddy merges (or skip merges when we shouldn't), which is likely beneficial to long-term fragmentation management, although the effects would be harder to measure. Settle for simpler and faster code as justification here.

Signed-off-by: Johannes Weiner
---
 mm/page_alloc.c | 61 ++++++++++++-------------------------------------
 1 file changed, 14 insertions(+), 47 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e430ac45df7c..20973887999b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -204,24 +204,6 @@ EXPORT_SYMBOL(node_states);

 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;

-/*
- * A cached value of the page's pageblock's migratetype, used when the page is
- * put on a pcplist. Used to avoid the pageblock migratetype lookup when
- * freeing from pcplists in most cases, at the cost of possibly becoming stale.
- * Also the migratetype set in the page does not necessarily match the pcplist
- * index, e.g. page might have MIGRATE_CMA set but be on a pcplist with any
- * other index - this ensures that it will be put on the correct CMA freelist.
- */
-static inline int get_pcppage_migratetype(struct page *page)
-{
-	return page->index;
-}
-
-static inline void set_pcppage_migratetype(struct page *page, int migratetype)
-{
-	page->index = migratetype;
-}
-
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
 unsigned int pageblock_order __read_mostly;
 #endif
@@ -1213,7 +1195,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
	int min_pindex = 0;
	int max_pindex = NR_PCP_LISTS - 1;
	unsigned int order;
-	bool isolated_pageblocks;
	struct page *page;

	/*
@@ -1226,7 +1207,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
		pindex = pindex - 1;

	spin_lock_irqsave(&zone->lock, flags);
-	isolated_pageblocks = has_isolate_pageblock(zone);

	while (count > 0) {
		struct list_head *list;
@@ -1249,10 +1229,12 @@ static void free_pcppages_bulk(struct zone *zone, int count,
		order = pindex_to_order(pindex);
		nr_pages = 1 << order;
		do {
+			unsigned long pfn;
			int mt;

			page = list_last_entry(list, struct page, pcp_list);
-			mt = get_pcppage_migratetype(page);
+			pfn = page_to_pfn(page);
+			mt = get_pfnblock_migratetype(page, pfn);

			/* must delete to avoid corrupting pcp list */
			list_del(&page->pcp_list);
@@ -1261,11 +1243,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
			/* MIGRATE_ISOLATE page should not go to pcplists */
			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);

-			/* Pageblock could have been isolated meanwhile */
-			if (unlikely(isolated_pageblocks))
-				mt = get_pageblock_migratetype(page);
-
-			__free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
+			__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
			trace_mm_page_pcpu_drain(page, order, mt);
		} while (count > 0 && !list_empty(list));
	}
@@ -1611,7 +1590,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
			continue;
		del_page_from_free_list(page, zone, current_order);
		expand(zone, page, order, current_order, migratetype);
-		set_pcppage_migratetype(page, migratetype);
		trace_mm_page_alloc_zone_locked(page, order, migratetype,
				pcp_allowed_order(order) &&
				migratetype < MIGRATE_PCPTYPES);
@@ -2181,7 +2159,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
		 * pages are ordered properly.
		 */
		list_add_tail(&page->pcp_list, list);
-		if (is_migrate_cma(get_pcppage_migratetype(page)))
+		if (is_migrate_cma(get_pageblock_migratetype(page)))
			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
					      -(1 << order));
	}
@@ -2340,19 +2318,6 @@ void drain_all_pages(struct zone *zone)
	__drain_all_pages(zone, false);
 }

-static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
-				    unsigned int order)
-{
-	int migratetype;
-
-	if (!free_pages_prepare(page, order, FPI_NONE))
-		return false;
-
-	migratetype = get_pfnblock_migratetype(page, pfn);
-	set_pcppage_migratetype(page, migratetype);
-	return true;
-}
-
 static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
		       bool free_high)
 {
@@ -2440,7 +2405,7 @@ void free_unref_page(struct page *page, unsigned int order)
	unsigned long pfn = page_to_pfn(page);
	int migratetype;

-	if (!free_unref_page_prepare(page, pfn, order))
+	if (!free_pages_prepare(page, order, FPI_NONE))
		return;

	/*
@@ -2450,7 +2415,7 @@ void free_unref_page(struct page *page, unsigned int order)
	 * areas back if necessary. Otherwise, we may have to free
	 * excessively into the page allocator
	 */
-	migratetype = get_pcppage_migratetype(page);
+	migratetype = get_pfnblock_migratetype(page, pfn);
	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
		if (unlikely(is_migrate_isolate(migratetype))) {
			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
@@ -2486,7 +2451,8 @@ void free_unref_page_list(struct list_head *list)
	/* Prepare pages for freeing */
	list_for_each_entry_safe(page, next, list, lru) {
		unsigned long pfn = page_to_pfn(page);
-		if (!free_unref_page_prepare(page, pfn, 0)) {
+
+		if (!free_pages_prepare(page, 0, FPI_NONE)) {
			list_del(&page->lru);
			continue;
		}
@@ -2495,7 +2461,7 @@ void free_unref_page_list(struct list_head *list)
		 * Free isolated pages directly to the allocator, see
		 * comment in free_unref_page.
		 */
-		migratetype = get_pcppage_migratetype(page);
+		migratetype = get_pfnblock_migratetype(page, pfn);
		if (unlikely(is_migrate_isolate(migratetype))) {
			list_del(&page->lru);
			free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
@@ -2504,10 +2470,11 @@ void free_unref_page_list(struct list_head *list)
	}

	list_for_each_entry_safe(page, next, list, lru) {
+		unsigned long pfn = page_to_pfn(page);
		struct zone *zone = page_zone(page);

		list_del(&page->lru);
-		migratetype = get_pcppage_migratetype(page);
+		migratetype = get_pfnblock_migratetype(page, pfn);

		/*
		 * Either different zone requiring a different pcp lock or
@@ -2530,7 +2497,7 @@ void free_unref_page_list(struct list_head *list)
			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
			if (unlikely(!pcp)) {
				pcp_trylock_finish(UP_flags);
-				free_one_page(zone, page, page_to_pfn(page),
+				free_one_page(zone, page, pfn,
					      0, migratetype, FPI_NONE);
				locked_zone = NULL;
				continue;
@@ -2705,7 +2672,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
			}
		}
		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pcppage_migratetype(page));
+					  get_pageblock_migratetype(page));
		spin_unlock_irqrestore(&zone->lock, flags);
	} while (check_new_pages(page, order));
From patchwork Mon Aug 21 18:33:35 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13359739
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/8] mm: page_alloc: fix highatomic landing on the wrong buddy list
Date: Mon, 21 Aug 2023 14:33:35 -0400
Message-ID: <20230821183733.106619-4-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
References: <20230821183733.106619-1-hannes@cmpxchg.org>
The following triggers from a custom debug check:

[   89.401754] page type is 3, passed migratetype is 1 (nr=8)
[   89.407930] WARNING: CPU: 2 PID: 75 at mm/page_alloc.c:706 __free_one_page+0x5ea/0x6b0
[   89.415847] Modules linked in:
[   89.418902] CPU: 2 PID: 75 Comm: kswapd0 Not tainted 6.5.0-rc1-00013-g42be896e9f77-dirty #233
[   89.427415] Hardware name: Micro-Star International Co., Ltd. MS-7B98/Z390-A PRO (MS-7B98), BIOS 1.80 12/25/2019
[   89.437572] RIP: 0010:__free_one_page+0x5ea/0x6b0
[   89.442271] Code:
[   89.461003] RSP: 0000:ffffc900001acea8 EFLAGS: 00010092
[   89.466221] RAX: 0000000000000036 RBX: 0000000000000003 RCX: 0000000000000000
[   89.473349] RDX: 0000000000000106 RSI: 0000000000000027 RDI: 00000000ffffffff
[   89.480478] RBP: ffffffff82ca4780 R08: 0000000000000001 R09: 0000000000000000
[   89.487601] R10: ffffffff8285d1e0 R11: ffffffff8285d1e0 R12: 0000000000000000
[   89.494725] R13: 0000000000063448 R14: ffffea00018d1200 R15: 0000000000063401
[   89.501853] FS:  0000000000000000(0000) GS:ffff88806e680000(0000) knlGS:0000000000000000
[   89.509930] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   89.515671] CR2: 00007fc66488b006 CR3: 00000000190b5001 CR4: 00000000003706e0
[   89.522798] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   89.529924] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   89.537048] Call Trace:
[   89.539498]
[   89.541517]  ? __free_one_page+0x5ea/0x6b0
[   89.545619]  ? __warn+0x7d/0x130
[   89.548852]  ? __free_one_page+0x5ea/0x6b0
[   89.552946]  ? report_bug+0x18d/0x1c0
[   89.556607]  ? handle_bug+0x3a/0x70
[   89.560097]  ? exc_invalid_op+0x13/0x60
[   89.563933]  ? asm_exc_invalid_op+0x16/0x20
[   89.568113]  ? __free_one_page+0x5ea/0x6b0
[   89.572210]  ? __free_one_page+0x5ea/0x6b0
[   89.576306]  ? refill_obj_stock+0xf5/0x1c0
[   89.580399] free_one_page.constprop.0+0x5c/0xe0

This is a HIGHATOMIC page being freed to the MOVABLE buddy list.

Highatomic pages have their own buddy freelists, but not their own pcplists. free_unref_page() adjusts the migratetype so they can hitchhike on the MOVABLE pcplist. However, when the pcp trylock then fails, they're fed directly to the buddy list - with the incorrect type.

Use MIGRATE_MOVABLE only for the pcp, not for the buddy bypass.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 20973887999b..a5e36d186893 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2403,7 +2403,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	struct per_cpu_pages *pcp;
 	struct zone *zone;
 	unsigned long pfn = page_to_pfn(page);
-	int migratetype;
+	int migratetype, pcpmigratetype;
 
 	if (!free_pages_prepare(page, order, FPI_NONE))
 		return;
@@ -2415,20 +2415,20 @@ void free_unref_page(struct page *page, unsigned int order)
 	 * areas back if necessary. Otherwise, we may have to free
 	 * excessively into the page allocator
 	 */
-	migratetype = get_pfnblock_migratetype(page, pfn);
+	migratetype = pcpmigratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
 			return;
 		}
-		migratetype = MIGRATE_MOVABLE;
+		pcpmigratetype = MIGRATE_MOVABLE;
 	}
 
 	zone = page_zone(page);
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_unref_page_commit(zone, pcp, page, migratetype, order);
+		free_unref_page_commit(zone, pcp, page, pcpmigratetype, order);
 		pcp_spin_unlock(pcp);
 	} else {
 		free_one_page(zone, page, pfn, order, migratetype, FPI_NONE);

From patchwork Mon Aug 21 18:33:36 2023
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/8] mm: page_alloc: fix up block types when merging compatible blocks
Date: Mon, 21 Aug 2023 14:33:36 -0400
Message-ID: <20230821183733.106619-5-hannes@cmpxchg.org>
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
The buddy allocator coalesces compatible blocks during freeing, but
it doesn't update the types of the subblocks to match. When an
allocation later breaks the chunk down again, its pieces will be put
on freelists of the wrong type. This encourages incompatible page
mixing (ask for one type, get another), and thus long-term
fragmentation.
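The stale-type problem can be shown with a userspace toy model (hypothetical names, not kernel code): if merging does not restamp the buddy's pageblock bits, a later split reads the old type and files the sub-blocks on the wrong freelist.

```c
#include <assert.h>

#define NBLOCKS 4	/* toy number of pageblocks in a max-order chunk */

int block_mt[NBLOCKS];	/* per-pageblock migratetype bits */

/* Analogous to change_pageblock_range(): stamp a run of blocks. */
void change_block_range(int start, int nblocks, int mt)
{
	while (nblocks--)
		block_mt[start++] = mt;
}

/*
 * Model of the fix: when merging buddies of compatible but different
 * types, rewrite the buddy's blocks to match the freeing page's type,
 * so a later split reads consistent type bits.
 */
void merge_buddies(int page_blk, int buddy_blk, int nblocks)
{
	if (block_mt[buddy_blk] != block_mt[page_blk])
		change_block_range(buddy_blk, nblocks, block_mt[page_blk]);
}
```

Without the restamp in merge_buddies(), the buddy's blocks would keep their old type after the merge, and splitting the coalesced chunk would scatter pages across mismatched freelists.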
Update the subblocks when merging a larger chunk, such that a later
expand() will maintain freelist type hygiene.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/page_alloc.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a5e36d186893..6c9f565b2613 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -438,6 +438,17 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 			page_to_pfn(page), MIGRATETYPE_MASK);
 }
 
+static void change_pageblock_range(struct page *pageblock_page,
+					int start_order, int migratetype)
+{
+	int nr_pageblocks = 1 << (start_order - pageblock_order);
+
+	while (nr_pageblocks--) {
+		set_pageblock_migratetype(pageblock_page, migratetype);
+		pageblock_page += pageblock_nr_pages;
+	}
+}
+
 #ifdef CONFIG_DEBUG_VM
 static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 {
@@ -808,10 +819,17 @@ static inline void __free_one_page(struct page *page,
 			 */
 			int buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
 
-			if (migratetype != buddy_mt
-			    && (!migratetype_is_mergeable(migratetype) ||
-				!migratetype_is_mergeable(buddy_mt)))
-				goto done_merging;
+			if (migratetype != buddy_mt) {
+				if (!migratetype_is_mergeable(migratetype) ||
+				    !migratetype_is_mergeable(buddy_mt))
+					goto done_merging;
+				/*
+				 * Match buddy type. This ensures that
+				 * an expand() down the line puts the
+				 * sub-blocks on the right freelists.
+				 */
+				set_pageblock_migratetype(buddy, migratetype);
+			}
 		}
 
 		/*
@@ -1687,17 +1705,6 @@ int move_freepages_block(struct zone *zone, struct page *page,
 						num_movable);
 }
 
-static void change_pageblock_range(struct page *pageblock_page,
-					int start_order, int migratetype)
-{
-	int nr_pageblocks = 1 << (start_order - pageblock_order);
-
-	while (nr_pageblocks--) {
-		set_pageblock_migratetype(pageblock_page, migratetype);
-		pageblock_page += pageblock_nr_pages;
-	}
-}
-
 /*
  * When we are falling back to another migratetype during allocation, try to
  * steal extra free pages from the same pageblocks to satisfy further

From patchwork Mon Aug 21 18:33:37 2023
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/8] mm: page_alloc: move free pages when converting block during isolation
Date: Mon, 21 Aug 2023 14:33:37 -0400
Message-ID: <20230821183733.106619-6-hannes@cmpxchg.org>
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
When claiming a block during compaction isolation, move any remaining
free pages to the correct freelists as well, instead of stranding
them on the wrong list. Otherwise, this encourages incompatible page
mixing down the line, and thus long-term fragmentation.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/page_alloc.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c9f565b2613..6a4004f07123 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2586,9 +2586,12 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * Only change normal pageblocks (i.e., they can merge
 			 * with others)
 			 */
-			if (migratetype_is_mergeable(mt))
+			if (migratetype_is_mergeable(mt)) {
 				set_pageblock_migratetype(page,
 							  MIGRATE_MOVABLE);
+				move_freepages_block(zone, page,
+						     MIGRATE_MOVABLE, NULL);
+			}
 		}
 	}
 

From patchwork Mon Aug 21 18:33:38 2023
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/8] mm: page_alloc: fix move_freepages_block() range error
Date: Mon, 21 Aug 2023 14:33:38 -0400
Message-ID: <20230821183733.106619-7-hannes@cmpxchg.org>
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
When a block is partially outside the zone of the cursor page,
move_freepages_block() cuts the range to the pivot page instead of
the zone start. This can leave large parts of the block behind, which
encourages incompatible page mixing down the line (ask for one type,
get another), and thus long-term fragmentation.
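A userspace sketch of the corrected clamping (hypothetical names, not the kernel function): when the block's first pfn falls outside the zone, the moved range should start at the zone boundary, not at the pivot pfn, so the whole in-zone part of the block is covered.

```c
#include <assert.h>

/* Toy zone span: [start_pfn, end_pfn). */
struct zone_span {
	unsigned long start_pfn, end_pfn;
};

/* Analogous to zone_spans_pfn(). */
int zone_spans(const struct zone_span *z, unsigned long pfn)
{
	return pfn >= z->start_pfn && pfn < z->end_pfn;
}

/*
 * Corrected logic: clamp an out-of-zone block start to the zone
 * start. The buggy version returned the pivot page's pfn here,
 * skipping everything between the zone start and the pivot.
 */
unsigned long clamp_block_start(const struct zone_span *z,
				unsigned long block_start_pfn)
{
	if (!zone_spans(z, block_start_pfn))
		return z->start_pfn;	/* fixed: zone boundary */
	return block_start_pfn;
}
```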
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6a4004f07123..6fcda8e96f16 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1697,7 +1697,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
 
 	/* Do not cross zone boundaries */
 	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = pfn;
+		start_pfn = zone->zone_start_pfn;
 	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;
 

From patchwork Mon Aug 21 18:33:39 2023
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 7/8] mm: page_alloc: fix freelist movement during block conversion
Date: Mon, 21 Aug 2023 14:33:39 -0400
Message-ID: <20230821183733.106619-8-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
References: <20230821183733.106619-1-hannes@cmpxchg.org>
MIME-Version: 1.0

Currently, page block type conversion during fallbacks, atomic
reservations and isolation can strand various amounts of free pages on
incorrect freelists.

For example, fallback stealing moves free pages in the block to the
new type's freelists, but then may not actually claim the block for
that type if there aren't enough compatible pages already allocated.

In all cases, free page moving might fail if the block straddles more
than one zone, in which case no free pages are moved at all, but the
block type is changed anyway.

This is detrimental to type hygiene on the freelists. It encourages
incompatible page mixing down the line (ask for one type, get another)
and thus contributes to long-term fragmentation.

Split the process into a proper transaction: check first if conversion
will happen, then try to move the free pages, and only if that was
successful convert the block to the new type.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/page-isolation.h |   3 +-
 mm/page_alloc.c                | 176 ++++++++++++++++++++-------------
 mm/page_isolation.c            |  22 +++--
 3 files changed, 121 insertions(+), 80 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 4ac34392823a..8550b3c91480 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,8 +34,7 @@ static inline bool is_migrate_isolate(int migratetype)
 #define REPORT_FAILURE 0x2
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable);
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6fcda8e96f16..42b62832323f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1646,9 +1646,8 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-static int move_freepages(struct zone *zone,
-			  unsigned long start_pfn, unsigned long end_pfn,
-			  int migratetype, int *num_movable)
+static int move_freepages(struct zone *zone, unsigned long start_pfn,
+			  unsigned long end_pfn, int migratetype)
 {
 	struct page *page;
 	unsigned long pfn;
@@ -1658,14 +1657,6 @@ static int move_freepages(struct zone *zone,
 	for (pfn = start_pfn; pfn <= end_pfn;) {
 		page = pfn_to_page(pfn);
 		if (!PageBuddy(page)) {
-			/*
-			 * We assume that pages that could be isolated for
-			 * migration are movable. But we don't actually try
-			 * isolating, as that would be expensive.
-			 */
-			if (num_movable &&
-			    (PageLRU(page) || __PageMovable(page)))
-				(*num_movable)++;
 			pfn++;
 			continue;
 		}
@@ -1683,26 +1674,62 @@ static int move_freepages(struct zone *zone,
 	return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable)
+static bool prep_move_freepages_block(struct zone *zone, struct page *page,
+				      unsigned long *start_pfn,
+				      unsigned long *end_pfn,
+				      int *num_free, int *num_movable)
 {
-	unsigned long start_pfn, end_pfn, pfn;
-
-	if (num_movable)
-		*num_movable = 0;
+	unsigned long pfn, start, end;
 
 	pfn = page_to_pfn(page);
-	start_pfn = pageblock_start_pfn(pfn);
-	end_pfn = pageblock_end_pfn(pfn) - 1;
+	start = pageblock_start_pfn(pfn);
+	end = pageblock_end_pfn(pfn) - 1;
 
 	/* Do not cross zone boundaries */
-	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = zone->zone_start_pfn;
-	if (!zone_spans_pfn(zone, end_pfn))
-		return 0;
+	if (!zone_spans_pfn(zone, start))
+		start = zone->zone_start_pfn;
+	if (!zone_spans_pfn(zone, end))
+		return false;
+
+	*start_pfn = start;
+	*end_pfn = end;
+
+	if (num_free) {
+		*num_free = 0;
+		*num_movable = 0;
+		for (pfn = start; pfn <= end;) {
+			page = pfn_to_page(pfn);
+			if (PageBuddy(page)) {
+				int nr = 1 << buddy_order(page);
+
+				*num_free += nr;
+				pfn += nr;
+				continue;
+			}
+			/*
+			 * We assume that pages that could be isolated for
+			 * migration are movable. But we don't actually try
+			 * isolating, as that would be expensive.
+			 */
+			if (PageLRU(page) || __PageMovable(page))
+				(*num_movable)++;
+			pfn++;
+		}
+	}
+
+	return true;
+}
 
-	return move_freepages(zone, start_pfn, end_pfn, migratetype,
-			      num_movable);
+int move_freepages_block(struct zone *zone, struct page *page,
+			 int migratetype)
+{
+	unsigned long start_pfn, end_pfn;
+
+	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
+				       NULL, NULL))
+		return -1;
+
+	return move_freepages(zone, start_pfn, end_pfn, migratetype);
 }
 
 /*
@@ -1776,33 +1803,36 @@ static inline bool boost_watermark(struct zone *zone)
 }
 
 /*
- * This function implements actual steal behaviour. If order is large enough,
- * we can steal whole pageblock. If not, we first move freepages in this
- * pageblock to our migratetype and determine how many already-allocated pages
- * are there in the pageblock with a compatible migratetype. If at least half
- * of pages are free or compatible, we can change migratetype of the pageblock
- * itself, so pages freed in the future will be put on the correct free list.
+ * This function implements actual steal behaviour. If order is large enough, we
+ * can claim the whole pageblock for the requested migratetype. If not, we check
+ * the pageblock for constituent pages; if at least half of the pages are free
+ * or compatible, we can still claim the whole block, so pages freed in the
+ * future will be put on the correct free list. Otherwise, we isolate exactly
+ * the order we need from the fallback block and leave its migratetype alone.
 */
 static void steal_suitable_fallback(struct zone *zone, struct page *page,
-		unsigned int alloc_flags, int start_type, bool whole_block)
+				    int current_order, int order, int start_type,
+				    unsigned int alloc_flags, bool whole_block)
 {
-	unsigned int current_order = buddy_order(page);
 	int free_pages, movable_pages, alike_pages;
-	int old_block_type;
+	unsigned long start_pfn, end_pfn;
+	int block_type;
 
-	old_block_type = get_pageblock_migratetype(page);
+	block_type = get_pageblock_migratetype(page);
 
 	/*
	 * This can happen due to races and we want to prevent broken
	 * highatomic accounting.
	 */
-	if (is_migrate_highatomic(old_block_type))
+	if (is_migrate_highatomic(block_type))
 		goto single_page;
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
+		del_page_from_free_list(page, zone, current_order);
 		change_pageblock_range(page, current_order, start_type);
-		goto single_page;
+		expand(zone, page, order, current_order, start_type);
+		return;
 	}
 
 	/*
@@ -1817,8 +1847,11 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (!whole_block)
 		goto single_page;
 
-	free_pages = move_freepages_block(zone, page, start_type,
-					  &movable_pages);
+	/* moving whole block can fail due to zone boundary conditions */
+	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
+				       &free_pages, &movable_pages))
+		goto single_page;
+
 	/*
 	 * Determine how many pages are compatible with our allocation.
 	 * For movable allocation, it's the number of movable pages which
@@ -1834,29 +1867,27 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
		 * vice versa, be conservative since we can't distinguish the
		 * exact migratetype of non-movable pages.
		 */
-		if (old_block_type == MIGRATE_MOVABLE)
+		if (block_type == MIGRATE_MOVABLE)
 			alike_pages = pageblock_nr_pages
 						- (free_pages + movable_pages);
 		else
 			alike_pages = 0;
 	}
 
-	/* moving whole block can fail due to zone boundary conditions */
-	if (!free_pages)
-		goto single_page;
-
 	/*
	 * If a sufficient number of pages in the block are either free or of
	 * comparable migratability as our allocation, claim the whole block.
	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
-			page_group_by_mobility_disabled)
+			page_group_by_mobility_disabled) {
+		move_freepages(zone, start_pfn, end_pfn, start_type);
 		set_pageblock_migratetype(page, start_type);
-
-	return;
+		block_type = start_type;
+	}
 
 single_page:
-	move_to_free_list(page, zone, current_order, start_type);
+	del_page_from_free_list(page, zone, current_order);
+	expand(zone, page, order, current_order, block_type);
 }
 
 /*
@@ -1921,9 +1952,10 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (migratetype_is_mergeable(mt)) {
-		zone->nr_reserved_highatomic += pageblock_nr_pages;
-		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
-		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
+		if (move_freepages_block(zone, page, MIGRATE_HIGHATOMIC) != -1) {
+			set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
+			zone->nr_reserved_highatomic += pageblock_nr_pages;
+		}
 	}
 
 out_unlock:
@@ -1948,7 +1980,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	struct zone *zone;
 	struct page *page;
 	int order;
-	bool ret;
+	int ret;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->highest_zoneidx,
 								ac->nodemask) {
@@ -1997,10 +2029,14 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
			 * of pageblocks that cannot be completely freed
			 * may increase.
			 */
+			ret = move_freepages_block(zone, page, ac->migratetype);
+			/*
+			 * Reserving this block already succeeded, so this should
+			 * not fail on zone boundaries.
+			 */
+			WARN_ON_ONCE(ret == -1);
 			set_pageblock_migratetype(page, ac->migratetype);
-			ret = move_freepages_block(zone, page, ac->migratetype,
-						   NULL);
-			if (ret) {
+			if (ret > 0) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
 			}
@@ -2021,7 +2057,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 * deviation from the rest of this file, to make the for loop
 * condition simpler.
 */
-static __always_inline bool
+static __always_inline struct page *
 __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 						unsigned int alloc_flags)
 {
@@ -2068,7 +2104,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 		goto do_steal;
 	}
 
-	return false;
+	return NULL;
 
 find_smallest:
 	for (current_order = order; current_order <= MAX_ORDER;
@@ -2089,13 +2125,14 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 do_steal:
 	page = get_page_from_free_area(area, fallback_mt);
 
-	steal_suitable_fallback(zone, page, alloc_flags, start_migratetype,
-								can_steal);
+	/* take off list, maybe claim block, expand remainder */
+	steal_suitable_fallback(zone, page, current_order, order,
+				start_migratetype, alloc_flags, can_steal);
 
 	trace_mm_page_alloc_extfrag(page, order, current_order,
 		start_migratetype, fallback_mt);
 
-	return true;
+	return page;
 }
@@ -2123,15 +2160,14 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 			return page;
 		}
 	}
-retry:
+
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
 		if (alloc_flags & ALLOC_CMA)
 			page = __rmqueue_cma_fallback(zone, order);
-
-		if (!page && __rmqueue_fallback(zone, order, migratetype,
-						alloc_flags))
-			goto retry;
+		else
+			page = __rmqueue_fallback(zone, order, migratetype,
+						  alloc_flags);
 	}
 	return page;
 }
@@ -2586,12 +2622,10 @@ int __isolate_free_page(struct page *page, unsigned int order)
			 * Only change normal pageblocks (i.e., they can merge
			 * with others)
			 */
-			if (migratetype_is_mergeable(mt)) {
-				set_pageblock_migratetype(page,
-							  MIGRATE_MOVABLE);
-				move_freepages_block(zone, page,
-						     MIGRATE_MOVABLE, NULL);
-			}
+			if (migratetype_is_mergeable(mt) &&
+			    move_freepages_block(zone, page,
+						 MIGRATE_MOVABLE) != -1)
+				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		}
 	}
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 6599cc965e21..f5e4d8676b36 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -178,15 +178,18 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
 			migratetype, isol_flags);
 	if (!unmovable) {
-		unsigned long nr_pages;
+		int nr_pages;
 		int mt = get_pageblock_migratetype(page);
 
+		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
+		/* Block spans zone boundaries? */
+		if (nr_pages == -1) {
+			spin_unlock_irqrestore(&zone->lock, flags);
+			return -EBUSY;
+		}
+		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 		zone->nr_isolate_pageblock++;
-		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
-						NULL);
-
-		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		spin_unlock_irqrestore(&zone->lock, flags);
 		return 0;
 	}
@@ -206,7 +209,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 static void unset_migratetype_isolate(struct page *page, int migratetype)
 {
 	struct zone *zone;
-	unsigned long flags, nr_pages;
+	unsigned long flags;
 	bool isolated_page = false;
 	unsigned int order;
 	struct page *buddy;
@@ -252,7 +255,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
	 * allocation.
	 */
 	if (!isolated_page) {
-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
+		int nr_pages = move_freepages_block(zone, page, migratetype);
+		/*
+		 * Isolating this block already succeeded, so this
+		 * should not fail on zone boundaries.
+		 */
+		WARN_ON_ONCE(nr_pages == -1);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);

From patchwork Mon Aug 21 18:33:40 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13359744
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 8/8] mm: page_alloc: consolidate free page accounting
Date: Mon, 21 Aug 2023 14:33:40 -0400
Message-ID: <20230821183733.106619-9-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230821183733.106619-1-hannes@cmpxchg.org>
References: <20230821183733.106619-1-hannes@cmpxchg.org>
MIME-Version: 1.0

Free page accounting currently happens a bit too high up the call
stack, where it has to deal with guard pages, compaction capturing,
block stealing and even page isolation. This is subtle and fragile,
and makes it difficult to hack on the code.

Push the accounting down to where pages enter and leave the physical
freelists, where all these higher-level exceptions are of no concern.
v2:
- fix CONFIG_DEBUG_PAGEALLOC build (Mel)

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/mm.h             |  18 ++---
 include/linux/page-isolation.h |   3 +-
 include/linux/vmstat.h         |   8 --
 mm/debug_page_alloc.c          |  12 +--
 mm/internal.h                  |   5 --
 mm/page_alloc.c                | 131 ++++++++++++++++++---------
 mm/page_isolation.c            |   7 +-
 7 files changed, 88 insertions(+), 96 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 406ab9ea818f..950c400ac53b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3550,24 +3550,22 @@ static inline bool page_is_guard(struct page *page)
 	return PageGuard(page);
 }
 
-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype);
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype)
+				  unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return false;
 
-	return __set_page_guard(zone, page, order, migratetype);
+	return __set_page_guard(zone, page, order);
 }
 
-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype);
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype)
+				    unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return;
 
-	__clear_page_guard(zone, page, order, migratetype);
+	__clear_page_guard(zone, page, order);
 }
 
 #else /* CONFIG_DEBUG_PAGEALLOC */
@@ -3577,9 +3575,9 @@ static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool debug_guardpage_enabled(void) { return false; }
 static inline bool page_is_guard(struct page *page) { return false; }
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype) { return false; }
+				  unsigned int order) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype) {}
+				    unsigned int order) {}
 #endif /* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 8550b3c91480..901915747960 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,7 +34,8 @@ static inline bool is_migrate_isolate(int migratetype)
 #define REPORT_FAILURE 0x2
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page, int migratetype);
+int move_freepages_block(struct zone *zone, struct page *page,
+			 int old_mt, int new_mt);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index fed855bae6d8..a4eae03f6094 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -487,14 +487,6 @@ static inline void node_stat_sub_folio(struct folio *folio,
 	mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
 }
 
-static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
-					     int migratetype)
-{
-	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
-	if (is_migrate_cma(migratetype))
-		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
-}
-
 extern const char * const vmstat_text[];
 
 static inline const char *zone_stat_name(enum zone_stat_item item)
diff --git a/mm/debug_page_alloc.c b/mm/debug_page_alloc.c
index f9d145730fd1..03a810927d0a 100644
--- a/mm/debug_page_alloc.c
+++ b/mm/debug_page_alloc.c
@@ -32,8 +32,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 }
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype)
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	if (order >= debug_guardpage_minorder())
 		return false;
@@ -41,19 +40,12 @@ bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
 	__SetPageGuard(page);
 	INIT_LIST_HEAD(&page->buddy_list);
 	set_page_private(page, order);
-	/* Guard pages are not available for any usage */
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, -(1 << order), migratetype);
 
 	return true;
 }
 
-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype)
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	__ClearPageGuard(page);
-	set_page_private(page, 0);
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, (1 << order), migratetype);
 }
diff --git a/mm/internal.h b/mm/internal.h
index a7d9e980429a..d86fd621880e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -865,11 +865,6 @@ static inline bool is_migrate_highatomic(enum migratetype migratetype)
 	return migratetype == MIGRATE_HIGHATOMIC;
 }
 
-static inline bool is_migrate_highatomic_page(struct page *page)
-{
-	return get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC;
-}
-
 void setup_zone_pageset(struct zone *zone);
 
 struct migration_target_control {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 42b62832323f..e7e790a64237 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -676,24 +676,36 @@ compaction_capture(struct capture_control *capc, struct page *page,
 }
 #endif /* CONFIG_COMPACTION */
 
-/* Used for pages not on another list */
-static inline void add_to_free_list(struct page *page, struct zone *zone,
-				    unsigned int order, int migratetype)
+static inline void account_freepages(struct page *page, struct zone *zone,
+				     int nr_pages, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	if (is_migrate_isolate(migratetype))
+		return;
 
-	list_add(&page->buddy_list, &area->free_list[migratetype]);
-	area->nr_free++;
+	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
+
+	if (is_migrate_cma(migratetype))
+		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
 }
 
 /* Used for pages not on another list */
-static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
-					 unsigned int order, int migratetype)
+static inline void add_to_free_list(struct page *page, struct zone *zone,
+				    unsigned int order, int migratetype,
+				    bool tail)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), migratetype, 1 << order);
+
+	if (tail)
+		list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
+	else
+		list_add(&page->buddy_list, &area->free_list[migratetype]);
 	area->nr_free++;
+
+	account_freepages(page, zone, 1 << order, migratetype);
 }
 
 /*
@@ -702,16 +714,28 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 * allocation again (e.g., optimization for memory onlining).
 */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
-				     unsigned int order, int migratetype)
+				     unsigned int order, int old_mt, int new_mt)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move_tail(&page->buddy_list, &area->free_list[migratetype]);
+	/* Free page moving can fail, so it happens before the type update */
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != old_mt,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), old_mt, 1 << order);
+
+	list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
+
+	account_freepages(page, zone, -(1 << order), old_mt);
+	account_freepages(page, zone, 1 << order, new_mt);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
-					   unsigned int order)
+					   unsigned int order, int migratetype)
 {
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), migratetype, 1 << order);
+
 	/* clear reported state and update reported page count */
 	if (page_reported(page))
 		__ClearPageReported(page);
@@ -720,6 +744,8 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
 	zone->free_area[order].nr_free--;
+
+	account_freepages(page, zone, -(1 << order), migratetype);
 }
 
 static inline struct page *get_page_from_free_area(struct free_area *area,
@@ -793,23 +819,21 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 	VM_BUG_ON(migratetype == -1);
-	if (likely(!is_migrate_isolate(migratetype)))
-		__mod_zone_freepage_state(zone, 1 << order, migratetype);
-
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
 	while (order < MAX_ORDER) {
-		if (compaction_capture(capc, page, order, migratetype)) {
-			__mod_zone_freepage_state(zone, -(1 << order),
-						  migratetype);
+		int buddy_mt;
+
+		if (compaction_capture(capc, page, order, migratetype))
 			return;
-		}
 
 		buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
 		if (!buddy)
 			goto done_merging;
 
+		buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
+
 		if (unlikely(order >= pageblock_order)) {
 			/*
			 * We want to prevent merge between freepages on pageblock
@@ -837,9 +861,9 @@ static inline void __free_one_page(struct page *page,
		 * merge with it and move up one order.
		 */
 		if (page_is_guard(buddy))
-			clear_page_guard(zone, buddy, order, migratetype);
+			clear_page_guard(zone, buddy, order);
 		else
-			del_page_from_free_list(buddy, zone, order);
+			del_page_from_free_list(buddy, zone, order, buddy_mt);
 		combined_pfn = buddy_pfn & pfn;
 		page = page + (combined_pfn - pfn);
 		pfn = combined_pfn;
@@ -856,10 +880,7 @@ static inline void __free_one_page(struct page *page,
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
 
-	if (to_tail)
-		add_to_free_list_tail(page, zone, order, migratetype);
-	else
-		add_to_free_list(page, zone, order, migratetype);
+	add_to_free_list(page, zone, order, migratetype, to_tail);
 
 	/* Notify page reporting subsystem of freed page */
 	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
@@ -901,10 +922,8 @@ int split_free_page(struct page *free_page,
 	}
 
 	mt = get_pfnblock_migratetype(free_page, free_page_pfn);
-	if (likely(!is_migrate_isolate(mt)))
-		__mod_zone_freepage_state(zone, -(1UL << order), mt);
+	del_page_from_free_list(free_page, zone, order, mt);
 
-	del_page_from_free_list(free_page, zone, order);
 	for (pfn = free_page_pfn; pfn < free_page_pfn + (1UL << order);) {
 		int mt = get_pfnblock_migratetype(pfn_to_page(pfn), pfn);
@@ -1433,10 +1452,10 @@ static inline void expand(struct zone *zone, struct page *page,
		 * Corresponding page table entries will not be touched,
		 * pages will stay not present in virtual address space
		 */
-		if (set_page_guard(zone, &page[size], high, migratetype))
+		if (set_page_guard(zone, &page[size], high))
 			continue;
 
-		add_to_free_list(&page[size], zone,
high, migratetype); + add_to_free_list(&page[size], zone, high, migratetype, false); set_buddy_order(&page[size], high); } } @@ -1606,7 +1625,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order, page = get_page_from_free_area(area, migratetype); if (!page) continue; - del_page_from_free_list(page, zone, current_order); + del_page_from_free_list(page, zone, current_order, migratetype); expand(zone, page, order, current_order, migratetype); trace_mm_page_alloc_zone_locked(page, order, migratetype, pcp_allowed_order(order) && @@ -1647,7 +1666,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone, * boundary. If alignment is required, use move_freepages_block() */ static int move_freepages(struct zone *zone, unsigned long start_pfn, - unsigned long end_pfn, int migratetype) + unsigned long end_pfn, int old_mt, int new_mt) { struct page *page; unsigned long pfn; @@ -1666,7 +1685,7 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn, VM_BUG_ON_PAGE(page_zone(page) != zone, page); order = buddy_order(page); - move_to_free_list(page, zone, order, migratetype); + move_to_free_list(page, zone, order, old_mt, new_mt); pfn += 1 << order; pages_moved += 1 << order; } @@ -1721,7 +1740,7 @@ static bool prep_move_freepages_block(struct zone *zone, struct page *page, } int move_freepages_block(struct zone *zone, struct page *page, - int migratetype) + int old_mt, int new_mt) { unsigned long start_pfn, end_pfn; @@ -1729,7 +1748,7 @@ int move_freepages_block(struct zone *zone, struct page *page, NULL, NULL)) return -1; - return move_freepages(zone, start_pfn, end_pfn, migratetype); + return move_freepages(zone, start_pfn, end_pfn, old_mt, new_mt); } /* @@ -1829,7 +1848,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page, /* Take ownership for orders >= pageblock_order */ if (current_order >= pageblock_order) { - del_page_from_free_list(page, zone, current_order); + del_page_from_free_list(page, 
zone, current_order, block_type); change_pageblock_range(page, current_order, start_type); expand(zone, page, order, current_order, start_type); return; @@ -1880,13 +1899,13 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page, */ if (free_pages + alike_pages >= (1 << (pageblock_order-1)) || page_group_by_mobility_disabled) { - move_freepages(zone, start_pfn, end_pfn, start_type); + move_freepages(zone, start_pfn, end_pfn, block_type, start_type); set_pageblock_migratetype(page, start_type); block_type = start_type; } single_page: - del_page_from_free_list(page, zone, current_order); + del_page_from_free_list(page, zone, current_order, block_type); expand(zone, page, order, current_order, block_type); } @@ -1952,7 +1971,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone, mt = get_pageblock_migratetype(page); /* Only reserve normal pageblocks (i.e., they can merge with others) */ if (migratetype_is_mergeable(mt)) { - if (move_freepages_block(zone, page, MIGRATE_HIGHATOMIC) != -1) { + if (move_freepages_block(zone, page, + mt, MIGRATE_HIGHATOMIC) != -1) { set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC); zone->nr_reserved_highatomic += pageblock_nr_pages; } @@ -1995,11 +2015,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac, spin_lock_irqsave(&zone->lock, flags); for (order = 0; order <= MAX_ORDER; order++) { struct free_area *area = &(zone->free_area[order]); + int mt; page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC); if (!page) continue; + mt = get_pageblock_migratetype(page); /* * In page freeing path, migratetype change is racy so * we can counter several free pages in a pageblock @@ -2007,7 +2029,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac, * from highatomic to ac->migratetype. So we should * adjust the count once. 
*/ - if (is_migrate_highatomic_page(page)) { + if (is_migrate_highatomic(mt)) { /* * It should never happen but changes to * locking could inadvertently allow a per-cpu @@ -2029,7 +2051,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac, * of pageblocks that cannot be completely freed * may increase. */ - ret = move_freepages_block(zone, page, ac->migratetype); + ret = move_freepages_block(zone, page, mt, + ac->migratetype); /* * Reserving this block already succeeded, so this should * not fail on zone boundaries. @@ -2202,12 +2225,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order, * pages are ordered properly. */ list_add_tail(&page->pcp_list, list); - if (is_migrate_cma(get_pageblock_migratetype(page))) - __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, - -(1 << order)); } - - __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order)); spin_unlock_irqrestore(&zone->lock, flags); return i; @@ -2604,11 +2622,9 @@ int __isolate_free_page(struct page *page, unsigned int order) watermark = zone->_watermark[WMARK_MIN] + (1UL << order); if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA)) return 0; - - __mod_zone_freepage_state(zone, -(1UL << order), mt); } - del_page_from_free_list(page, zone, order); + del_page_from_free_list(page, zone, order, mt); /* * Set the pageblock if the isolated page is at least half of a @@ -2623,7 +2639,7 @@ int __isolate_free_page(struct page *page, unsigned int order) * with others) */ if (migratetype_is_mergeable(mt) && - move_freepages_block(zone, page, + move_freepages_block(zone, page, mt, MIGRATE_MOVABLE) != -1) set_pageblock_migratetype(page, MIGRATE_MOVABLE); } @@ -2715,8 +2731,6 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone, return NULL; } } - __mod_zone_freepage_state(zone, -(1 << order), - get_pageblock_migratetype(page)); spin_unlock_irqrestore(&zone->lock, flags); } while (check_new_pages(page, order)); @@ -6488,8 +6502,9 @@ void 
__offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn) BUG_ON(page_count(page)); BUG_ON(!PageBuddy(page)); + VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE); order = buddy_order(page); - del_page_from_free_list(page, zone, order); + del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE); pfn += (1 << order); } spin_unlock_irqrestore(&zone->lock, flags); @@ -6540,11 +6555,12 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page, current_buddy = page + size; } - if (set_page_guard(zone, current_buddy, high, migratetype)) + if (set_page_guard(zone, current_buddy, high)) continue; if (current_buddy != target) { - add_to_free_list(current_buddy, zone, high, migratetype); + add_to_free_list(current_buddy, zone, high, + migratetype, false); set_buddy_order(current_buddy, high); page = next_page; } @@ -6572,12 +6588,11 @@ bool take_page_off_buddy(struct page *page) int migratetype = get_pfnblock_migratetype(page_head, pfn_head); - del_page_from_free_list(page_head, zone, page_order); + del_page_from_free_list(page_head, zone, page_order, + migratetype); break_down_buddy_pages(zone, page_head, page, 0, page_order, migratetype); SetPageHWPoisonTakenOff(page); - if (!is_migrate_isolate(migratetype)) - __mod_zone_freepage_state(zone, -1, migratetype); ret = true; break; } diff --git a/mm/page_isolation.c b/mm/page_isolation.c index f5e4d8676b36..b0705e709973 100644 --- a/mm/page_isolation.c +++ b/mm/page_isolation.c @@ -181,13 +181,12 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_ int nr_pages; int mt = get_pageblock_migratetype(page); - nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE); + nr_pages = move_freepages_block(zone, page, mt, MIGRATE_ISOLATE); /* Block spans zone boundaries? 
*/ if (nr_pages == -1) { spin_unlock_irqrestore(&zone->lock, flags); return -EBUSY; } - __mod_zone_freepage_state(zone, -nr_pages, mt); set_pageblock_migratetype(page, MIGRATE_ISOLATE); zone->nr_isolate_pageblock++; spin_unlock_irqrestore(&zone->lock, flags); @@ -255,13 +254,13 @@ static void unset_migratetype_isolate(struct page *page, int migratetype) * allocation. */ if (!isolated_page) { - int nr_pages = move_freepages_block(zone, page, migratetype); + int nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE, + migratetype); /* * Isolating this block already succeeded, so this * should not fail on zone boundaries. */ WARN_ON_ONCE(nr_pages == -1); - __mod_zone_freepage_state(zone, nr_pages, migratetype); } set_pageblock_migratetype(page, migratetype); if (isolated_page)