From patchwork Fri Jun 14 01:51:36 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13697714
From: Lance Yang <ioworker0@gmail.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, sj@kernel.org, baolin.wang@linux.alibaba.com,
 maskray@google.com, ziy@nvidia.com, ryan.roberts@arm.com, david@redhat.com,
 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
 shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com,
 wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com,
 minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Lance Yang, Barry Song
Subject: [PATCH v8 1/3] mm/rmap: remove duplicated exit code in pagewalk loop
Date: Fri, 14 Jun 2024 09:51:36 +0800
Message-Id: <20240614015138.31461-2-ioworker0@gmail.com>
In-Reply-To: <20240614015138.31461-1-ioworker0@gmail.com>
References: <20240614015138.31461-1-ioworker0@gmail.com>

Introduce the labels walk_done and walk_abort as exit points to eliminate duplicated exit
code in the pagewalk loop.

Reviewed-by: Zi Yan
Reviewed-by: Baolin Wang
Reviewed-by: David Hildenbrand
Reviewed-by: Barry Song
Signed-off-by: Lance Yang
---
 mm/rmap.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ae250b2b4d55..2d778725e4f5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1681,9 +1681,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			/* Restore the mlock which got missed */
 			if (!folio_test_large(folio))
 				mlock_vma_folio(folio, vma);
-			page_vma_mapped_walk_done(&pvmw);
-			ret = false;
-			break;
+			goto walk_abort;
 		}
 
 		pfn = pte_pfn(ptep_get(pvmw.pte));
@@ -1721,11 +1719,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			if (!anon) {
 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
-				if (!hugetlb_vma_trylock_write(vma)) {
-					page_vma_mapped_walk_done(&pvmw);
-					ret = false;
-					break;
-				}
+				if (!hugetlb_vma_trylock_write(vma))
+					goto walk_abort;
 				if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
 					hugetlb_vma_unlock_write(vma);
 					flush_tlb_range(vma,
@@ -1740,8 +1735,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					 * actual page and drop map count
 					 * to zero.
 					 */
-					page_vma_mapped_walk_done(&pvmw);
-					break;
+					goto walk_done;
 				}
 				hugetlb_vma_unlock_write(vma);
 			}
@@ -1813,9 +1807,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			if (unlikely(folio_test_swapbacked(folio) !=
 					folio_test_swapcache(folio))) {
 				WARN_ON_ONCE(1);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_abort;
 			}
 
 			/* MADV_FREE page check */
@@ -1854,23 +1846,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				folio_set_swapbacked(folio);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_abort;
 			}
 
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_abort;
 			}
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_abort;
 			}
 
 			/* See folio_try_share_anon_rmap(): clear PTE first. */
@@ -1878,9 +1864,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			    folio_try_share_anon_rmap_pte(folio, subpage)) {
 				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = false;
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				goto walk_abort;
 			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
@@ -1920,6 +1904,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
+		continue;
+walk_abort:
+		ret = false;
+walk_done:
+		page_vma_mapped_walk_done(&pvmw);
+		break;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);

From patchwork Fri Jun 14 01:51:37 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13697715
From: Lance Yang <ioworker0@gmail.com>
To: akpm@linux-foundation.org
Subject: [PATCH v8 2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
Date: Fri, 14 Jun 2024 09:51:37 +0800
Message-Id: <20240614015138.31461-3-ioworker0@gmail.com>
In-Reply-To: <20240614015138.31461-1-ioworker0@gmail.com>
References: <20240614015138.31461-1-ioworker0@gmail.com>
In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
folios, start the pagewalk first, then call split_huge_pmd_address()
to split the folio.

Suggested-by: David Hildenbrand
Suggested-by: Baolin Wang
Signed-off-by: Lance Yang
Acked-by: David Hildenbrand
Acked-by: Zi Yan
---
 include/linux/huge_mm.h |  6 ++++++
 include/linux/rmap.h    | 24 +++++++++++++++++++++++
 mm/huge_memory.c        | 42 +++++++++++++++++++++--------------------
 mm/rmap.c               | 21 +++++++++++++++------
 4 files changed, 67 insertions(+), 26 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7ad41de5eaea..9f720b0731c4 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -428,6 +428,9 @@ static inline bool thp_migration_supported(void)
 	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio);
+
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool folio_test_pmd_mappable(struct folio *folio)
@@ -490,6 +493,9 @@ static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio) {}
 static inline void split_huge_pmd_address(struct vm_area_struct *vma,
 		unsigned long address, bool freeze, struct folio *folio) {}
+static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long address, pmd_t *pmd,
+					 bool freeze, struct folio *folio) {}
 
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0fd9bebce54c..d1c5e2d694b2 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -703,6 +703,30 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 		spin_unlock(pvmw->ptl);
 }
 
+/**
+ * page_vma_mapped_walk_restart - Restart the page table walk.
+ * @pvmw: Pointer to struct page_vma_mapped_walk.
+ *
+ * It restarts the page table walk when changes occur in the page
+ * table, such as splitting a PMD. Ensures that the PTL held during
+ * the previous walk is released and resets the state to allow for
+ * a new walk starting at the current address stored in pvmw->address.
+ */
+static inline void
+page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
+{
+	WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
+
+	if (likely(pvmw->ptl))
+		spin_unlock(pvmw->ptl);
+	else
+		WARN_ON_ONCE(1);
+
+	pvmw->ptl = NULL;
+	pvmw->pmd = NULL;
+	pvmw->pte = NULL;
+}
+
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
 
 /*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 70d20fefc6db..e766d3f3a302 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2582,6 +2582,27 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pmd_populate(mm, pmd, pgtable);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio)
+{
+	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+	VM_BUG_ON(freeze && !folio);
+
+	/*
+	 * When the caller requests to set up a migration entry, we
+	 * require a folio to check the PMD against. Otherwise, there
+	 * is a risk of replacing the wrong folio.
+	 */
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != pmd_folio(*pmd))
+			return;
+		__split_huge_pmd_locked(vma, pmd, address, freeze);
+	}
+}
+
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio)
 {
@@ -2593,26 +2614,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(vma->vm_mm, pmd);
-
-	/*
-	 * If caller asks to setup a migration entry, we need a folio to check
-	 * pmd against. Otherwise we can end up replacing wrong folio.
-	 */
-	VM_BUG_ON(freeze && !folio);
-	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
-
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
-		/*
-		 * It's safe to call pmd_page when folio is set because it's
-		 * guaranteed that pmd is present.
-		 */
-		if (folio && folio != pmd_folio(*pmd))
-			goto out;
-		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-	}
-
-out:
+	split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 2d778725e4f5..dacf24bc82f0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1642,9 +1642,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	if (flags & TTU_SYNC)
 		pvmw.flags = PVMW_SYNC;
 
-	if (flags & TTU_SPLIT_HUGE_PMD)
-		split_huge_pmd_address(vma, address, false, folio);
-
 	/*
 	 * For THP, we have to assume the worse case ie pmd for invalidation.
 	 * For hugetlb, it could be much worse if we need to do pud
@@ -1670,9 +1667,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
-		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
@@ -1684,6 +1678,21 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto walk_abort;
 		}
 
+		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
+			/*
+			 * We temporarily have to drop the PTL and start once
+			 * again from that now-PTE-mapped page table.
+			 */
+			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
+					      false, folio);
+			flags &= ~TTU_SPLIT_HUGE_PMD;
+			page_vma_mapped_walk_restart(&pvmw);
+			continue;
+		}
+
+		/* Unexpected PMD-mapped THP? */
+		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
+
 		pfn = pte_pfn(ptep_get(pvmw.pte));
 		subpage = folio_page(folio, pfn - folio_pfn(folio));
 		address = pvmw.address;

From patchwork Fri Jun 14 01:51:38 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13697716
From: Lance Yang <ioworker0@gmail.com>
To: akpm@linux-foundation.org
Subject: [PATCH v8 3/3] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
Date: Fri, 14 Jun 2024 09:51:38 +0800
Message-Id: <20240614015138.31461-4-ioworker0@gmail.com>
In-Reply-To: <20240614015138.31461-1-ioworker0@gmail.com>
References: <20240614015138.31461-1-ioworker0@gmail.com>
When the user no longer requires the pages, they would use
madvise(MADV_FREE) to mark the pages as lazy free. Subsequently, they
typically would not re-write to that memory again.

During memory reclaim, if we detect that the large folio and its PMD are
both still marked as clean and there are no unexpected references (such
as GUP), we can just discard the memory lazily, improving the efficiency
of memory reclamation in this case.
On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
mem_cgroup_force_empty() results in the following runtimes in seconds
(shorter is better):

--------------------------------------------
|     Old      |     New      |   Change   |
--------------------------------------------
|   0.683426   |   0.049197   |  -92.80%   |
--------------------------------------------

Suggested-by: Zi Yan
Suggested-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 include/linux/huge_mm.h |  9 +++++
 mm/huge_memory.c        | 76 +++++++++++++++++++++++++++++++++++++++++
 mm/rmap.c               | 27 +++++++++------
 3 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9f720b0731c4..212cca384d7e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -430,6 +430,8 @@ static inline bool thp_migration_supported(void)
 
 void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze, struct folio *folio);
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio);
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -497,6 +499,13 @@ static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
 					 unsigned long address, pmd_t *pmd,
 					 bool freeze, struct folio *folio) {}
 
+static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long addr, pmd_t *pmdp,
+					 struct folio *folio)
+{
+	return false;
+}
+
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e766d3f3a302..425374ae06ed 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2688,6 +2688,82 @@ static void unmap_folio(struct folio *folio)
 	try_to_unmap_flush();
 }
 
+static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
+					    unsigned long addr, pmd_t *pmdp,
+					    struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(folio_test_swapbacked(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
+	struct mm_struct *mm = vma->vm_mm;
+	int ref_count, map_count;
+	pmd_t orig_pmd = *pmdp;
+	struct page *page;
+
+	if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
+		return false;
+
+	page = pmd_page(orig_pmd);
+	if (unlikely(page_folio(page) != folio))
+		return false;
+
+	if (folio_test_dirty(folio) || pmd_dirty(orig_pmd))
+		return false;
+
+	orig_pmd = pmdp_huge_clear_flush(vma, addr, pmdp);
+
+	/*
+	 * Syncing against concurrent GUP-fast:
+	 * - clear PMD; barrier; read refcount
+	 * - inc refcount; barrier; read PMD
+	 */
+	smp_mb();
+
+	ref_count = folio_ref_count(folio);
+	map_count = folio_mapcount(folio);
+
+	/*
+	 * Order reads for folio refcount and dirty flag
+	 * (see comments in __remove_mapping()).
+	 */
+	smp_rmb();
+
+	/*
+	 * If the folio or its PMD is redirtied at this point, or if there
+	 * are unexpected references, we will give up to discard this folio
+	 * and remap it.
+	 *
+	 * The only folio refs must be one from isolation plus the rmap(s).
+	 */
+	if (folio_test_dirty(folio) || pmd_dirty(orig_pmd) ||
+	    ref_count != map_count + 1) {
+		set_pmd_at(mm, addr, pmdp, orig_pmd);
+		return false;
+	}
+
+	folio_remove_rmap_pmd(folio, page, vma);
+	zap_deposited_table(mm, pmdp);
+	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	if (vma->vm_flags & VM_LOCKED)
+		mlock_drain_local();
+	folio_put(folio);
+
+	return true;
+}
+
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+			   pmd_t *pmdp, struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
+	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_WARN_ON_ONCE(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
+
+	if (folio_test_anon(folio) && !folio_test_swapbacked(folio))
+		return __discard_anon_folio_pmd_locked(vma, addr, pmdp, folio);
+
+	return false;
+}
+
 static void remap_page(struct folio *folio, unsigned long nr)
 {
 	int i = 0;
diff --git a/mm/rmap.c b/mm/rmap.c
index dacf24bc82f0..7d97806f74cd 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1678,16 +1678,23 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto walk_abort;
 		}
 
-		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
-			/*
-			 * We temporarily have to drop the PTL and start once
-			 * again from that now-PTE-mapped page table.
-			 */
-			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
-					      false, folio);
-			flags &= ~TTU_SPLIT_HUGE_PMD;
-			page_vma_mapped_walk_restart(&pvmw);
-			continue;
+		if (!pvmw.pte) {
+			if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
+						  folio))
+				goto walk_done;
+
+			if (flags & TTU_SPLIT_HUGE_PMD) {
+				/*
+				 * We temporarily have to drop the PTL and start
+				 * once again from that now-PTE-mapped page
+				 * table.
+				 */
+				split_huge_pmd_locked(vma, pvmw.address,
+						      pvmw.pmd, false, folio);
+				flags &= ~TTU_SPLIT_HUGE_PMD;
+				page_vma_mapped_walk_restart(&pvmw);
+				continue;
+			}
 		}
 
 		/* Unexpected PMD-mapped THP? */