From patchwork Thu Apr 18 13:44:32 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13634762
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com,
    peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v10 1/4] mm/madvise: introduce clear_young_dirty_ptes() batch helper
Date: Thu, 18 Apr 2024 21:44:32 +0800
Message-Id: <20240418134435.6092-2-ioworker0@gmail.com>
In-Reply-To: <20240418134435.6092-1-ioworker0@gmail.com>
References: <20240418134435.6092-1-ioworker0@gmail.com>
This commit introduces clear_young_dirty_ptes() to replace mkold_ptes().
By doing so, we can use the same function for both use cases
(madvise_pageout and madvise_free), and it also provides the flexibility
to only clear the dirty flag in the future if needed.

Suggested-by: Ryan Roberts
Acked-by: David Hildenbrand
Reviewed-by: Ryan Roberts
Signed-off-by: Lance Yang
---
 include/linux/mm_types.h |  9 +++++
 include/linux/pgtable.h  | 74 ++++++++++++++++++++++++----------------
 mm/madvise.c             |  3 +-
 3 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db0adf5721cc..24323c7d0bd4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1368,6 +1368,15 @@ enum fault_flag {
 
 typedef unsigned int __bitwise zap_flags_t;
 
+/* Flags for clear_young_dirty_ptes(). */
+typedef int __bitwise cydp_t;
+
+/* Clear the access bit */
+#define CYDP_CLEAR_YOUNG ((__force cydp_t)BIT(0))
+
+/* Clear the dirty bit */
+#define CYDP_CLEAR_DIRTY ((__force cydp_t)BIT(1))
+
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
  * other. Here is what they mean, and how to use them:
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e2f45e22a6d1..18019f037bae 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -361,36 +361,6 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif
 
-#ifndef mkold_ptes
-/**
- * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
- * @vma: VMA the pages are mapped into.
- * @addr: Address the first page is mapped at.
- * @ptep: Page table pointer for the first entry.
- * @nr: Number of entries to mark old.
- *
- * May be overridden by the architecture; otherwise, implemented as a simple
- * loop over ptep_test_and_clear_young().
- *
- * Note that PTE bits in the PTE range besides the PFN can differ. For example,
- * some PTEs might be write-protected.
- *
- * Context: The caller holds the page table lock. The PTEs map consecutive
- * pages that belong to the same folio. The PTEs are all in the same PMD.
- */
-static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
-                pte_t *ptep, unsigned int nr)
-{
-        for (;;) {
-                ptep_test_and_clear_young(vma, addr, ptep);
-                if (--nr == 0)
-                        break;
-                ptep++;
-                addr += PAGE_SIZE;
-        }
-}
-#endif
-
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
@@ -489,6 +459,50 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 #endif
 
+#ifndef clear_young_dirty_ptes
+/**
+ * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
+ *              same folio as old/clean.
+ * @vma: VMA the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old/clean.
+ * @flags: Flags to modify the PTE batch semantics.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ */
+static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
+                                          unsigned long addr, pte_t *ptep,
+                                          unsigned int nr, cydp_t flags)
+{
+        pte_t pte;
+
+        for (;;) {
+                if (flags == CYDP_CLEAR_YOUNG)
+                        ptep_test_and_clear_young(vma, addr, ptep);
+                else {
+                        pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
+                        if (flags & CYDP_CLEAR_YOUNG)
+                                pte = pte_mkold(pte);
+                        if (flags & CYDP_CLEAR_DIRTY)
+                                pte = pte_mkclean(pte);
+                        set_pte_at(vma->vm_mm, addr, ptep, pte);
+                }
+                if (--nr == 0)
+                        break;
+                ptep++;
+                addr += PAGE_SIZE;
+        }
+}
+#endif
+
 static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
                               pte_t *ptep)
 {
diff --git a/mm/madvise.c b/mm/madvise.c
index 4b869b682fd5..f5e3699e7b54 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -507,7 +507,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
                         continue;
 
                 if (!pageout && pte_young(ptent)) {
-                        mkold_ptes(vma, addr, pte, nr);
+                        clear_young_dirty_ptes(vma, addr, pte, nr,
+                                               CYDP_CLEAR_YOUNG);
                         tlb_remove_tlb_entries(tlb, pte, nr, addr);
                 }
 
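[Illustration] As a quick sketch of the flag semantics introduced in the patch
above, here is a small stand-alone C example (not kernel code; the fake_pte
type and clear_young_dirty() helper are invented for illustration) showing how
each CYDP_* bit independently selects which attribute is cleared across a
batch of entries:

/*
 * Stand-alone model of the CYDP_* flag semantics; the struct and the
 * helper below are invented for illustration and are not kernel code.
 */
#include <stdio.h>

#define CYDP_CLEAR_YOUNG (1 << 0)
#define CYDP_CLEAR_DIRTY (1 << 1)

struct fake_pte {
        unsigned int young : 1;
        unsigned int dirty : 1;
};

static void clear_young_dirty(struct fake_pte *ptes, unsigned int nr, int flags)
{
        for (unsigned int i = 0; i < nr; i++) {
                if (flags & CYDP_CLEAR_YOUNG)
                        ptes[i].young = 0;      /* analogous to pte_mkold() */
                if (flags & CYDP_CLEAR_DIRTY)
                        ptes[i].dirty = 0;      /* analogous to pte_mkclean() */
        }
}

int main(void)
{
        struct fake_pte ptes[4] = { {1, 1}, {1, 0}, {0, 1}, {1, 1} };

        /* madvise_free-style usage clears both bits across the batch. */
        clear_young_dirty(ptes, 4, CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY);

        for (int i = 0; i < 4; i++)
                printf("pte[%d]: young=%u dirty=%u\n", i,
                       (unsigned int)ptes[i].young, (unsigned int)ptes[i].dirty);
        return 0;
}

madvise_pageout passes only CYDP_CLEAR_YOUNG, which is why the kernel helper
keeps the cheaper ptep_test_and_clear_young() path for that case.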
From patchwork Thu Apr 18 13:44:33 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13634761
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com,
    peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v10 2/4] mm/arm64: override clear_young_dirty_ptes() batch helper
Date: Thu, 18 Apr 2024 21:44:33 +0800
Message-Id: <20240418134435.6092-3-ioworker0@gmail.com>
In-Reply-To: <20240418134435.6092-1-ioworker0@gmail.com>
References: <20240418134435.6092-1-ioworker0@gmail.com>

The per-pte get_and_clear/modify/set approach would result in
unfolding/refolding for contpte mappings on arm64. So we need to
override clear_young_dirty_ptes() for arm64 to avoid it.
Suggested-by: Barry Song <21cnbao@gmail.com>
Suggested-by: Ryan Roberts
Reviewed-by: Ryan Roberts
Signed-off-by: Lance Yang
---
 arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 29 +++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..1303d30287dc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1223,6 +1223,46 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
                 __ptep_set_wrprotect(mm, address, ptep);
 }
 
+static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
+                                           unsigned long addr, pte_t *ptep,
+                                           pte_t pte, cydp_t flags)
+{
+        pte_t old_pte;
+
+        do {
+                old_pte = pte;
+
+                if (flags & CYDP_CLEAR_YOUNG)
+                        pte = pte_mkold(pte);
+                if (flags & CYDP_CLEAR_DIRTY)
+                        pte = pte_mkclean(pte);
+
+                pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+                                               pte_val(old_pte), pte_val(pte));
+        } while (pte_val(pte) != pte_val(old_pte));
+}
+
+static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
+                                            unsigned long addr, pte_t *ptep,
+                                            unsigned int nr, cydp_t flags)
+{
+        pte_t pte;
+
+        for (;;) {
+                pte = __ptep_get(ptep);
+
+                if (flags == (CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY))
+                        __set_pte(ptep, pte_mkclean(pte_mkold(pte)));
+                else
+                        __clear_young_dirty_pte(vma, addr, ptep, pte, flags);
+
+                if (--nr == 0)
+                        break;
+                ptep++;
+                addr += PAGE_SIZE;
+        }
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
@@ -1379,6 +1419,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
                                 unsigned long addr, pte_t *ptep,
                                 pte_t entry, int dirty);
+extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
+                                unsigned long addr, pte_t *ptep,
+                                unsigned int nr, cydp_t flags);
 
 static __always_inline void contpte_try_fold(struct mm_struct *mm,
                                 unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1646,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
         return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
 }
 
+#define clear_young_dirty_ptes clear_young_dirty_ptes
+static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
+                                          unsigned long addr, pte_t *ptep,
+                                          unsigned int nr, cydp_t flags)
+{
+        if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
+                __clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
+        else
+                contpte_clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
+}
+
 #else /* CONFIG_ARM64_CONTPTE */
 
 #define ptep_get                                __ptep_get
@@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define wrprotect_ptes                          __wrprotect_ptes
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 #define ptep_set_access_flags                   __ptep_set_access_flags
+#define clear_young_dirty_ptes                  __clear_young_dirty_ptes
 
 #endif /* CONFIG_ARM64_CONTPTE */
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..9f9486de0004 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -361,6 +361,35 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
 
+void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
+                                    unsigned long addr, pte_t *ptep,
+                                    unsigned int nr, cydp_t flags)
+{
+        /*
+         * We can safely clear access/dirty without needing to unfold from
+         * the architecture's perspective, even when contpte is set. If the
+         * range starts or ends midway through a contpte block, we can just
+         * expand to include the full contpte block. While this is not
+         * exactly what the core-mm asked for, it tracks access/dirty per
+         * folio, not per page. And since we only create a contpte block
+         * when it is covered by a single folio, we can get away with
+         * clearing access/dirty for the whole block.
+         */
+        unsigned long start = addr;
+        unsigned long end = start + nr * PAGE_SIZE;
+
+        if (pte_cont(__ptep_get(ptep + nr - 1)))
+                end = ALIGN(end, CONT_PTE_SIZE);
+
+        if (pte_cont(__ptep_get(ptep))) {
+                start = ALIGN_DOWN(start, CONT_PTE_SIZE);
+                ptep = contpte_align_down(ptep);
+        }
+
+        __clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags);
+}
+EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
+
 int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
                                         unsigned long addr, pte_t *ptep,
                                         pte_t entry, int dirty)
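[Illustration] The interesting part of the arm64 override above is how
contpte_clear_young_dirty_ptes() widens the requested range to whole contpte
blocks. Below is a stand-alone sketch of that alignment step, assuming (for
illustration only) 16 PTEs per contpte block and that both ends of the range
are cont-mapped; the real helper works on addresses and checks pte_cont()
before expanding each end:

/*
 * Stand-alone illustration of expanding a PTE index range to contpte
 * block boundaries. CONT_PTES = 16 is an assumption for the example.
 */
#include <stdio.h>

#define CONT_PTES        16UL
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned long start = 19, nr = 10;      /* PTE indices 19..28 */
        unsigned long end = start + nr;

        /* Cover the whole contpte block at each end of the range. */
        end = ALIGN(end, CONT_PTES);
        start = ALIGN_DOWN(start, CONT_PTES);

        printf("expanded range: [%lu, %lu) -> %lu ptes\n",
               start, end, end - start);
        return 0;
}

Because access/dirty are tracked per folio rather than per page, clearing them
for the whole block is safe even though it covers slightly more than the
core-mm asked for.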
From patchwork Thu Apr 18 13:44:34 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13634763
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com,
    peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v10 3/4] mm/memory: add any_dirty optional pointer to folio_pte_batch()
Date: Thu, 18 Apr 2024 21:44:34 +0800
Message-Id: <20240418134435.6092-4-ioworker0@gmail.com>
In-Reply-To: <20240418134435.6092-1-ioworker0@gmail.com>
References: <20240418134435.6092-1-ioworker0@gmail.com>
This commit adds the any_dirty pointer as an optional parameter to the
folio_pte_batch() function. By using both the any_young and any_dirty
pointers, madvise_free can make smarter decisions about whether to clear
the PTEs when marking large folios as lazyfree.

Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 mm/internal.h | 12 ++++++++++--
 mm/madvise.c  | 19 ++++++++++++++-----
 mm/memory.c   |  4 ++--
 3 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c6483f73ec13..daa59cef85d7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  *                first one is writable.
  * @any_young: Optional pointer to indicate whether any entry except the
  *                first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ *                first one is dirty.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
                 pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-                bool *any_writable, bool *any_young)
+                bool *any_writable, bool *any_young, bool *any_dirty)
 {
         unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
         const pte_t *end_ptep = start_ptep + max_nr;
         pte_t expected_pte, *ptep;
-        bool writable, young;
+        bool writable, young, dirty;
         int nr;
 
         if (any_writable)
                 *any_writable = false;
         if (any_young)
                 *any_young = false;
+        if (any_dirty)
+                *any_dirty = false;
 
         VM_WARN_ON_FOLIO(!pte_present(pte), folio);
         VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
                         writable = !!pte_write(pte);
                 if (any_young)
                         young = !!pte_young(pte);
+                if (any_dirty)
+                        dirty = !!pte_dirty(pte);
                 pte = __pte_batch_clear_ignored(pte, flags);
 
                 if (!pte_same(pte, expected_pte))
@@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
                         *any_writable |= writable;
                 if (any_young)
                         *any_young |= young;
+                if (any_dirty)
+                        *any_dirty |= dirty;
 
                 nr = pte_batch_hint(ptep, pte);
                 expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index f5e3699e7b54..4597a3568e7e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,18 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
                file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+                                          struct folio *folio, pte_t *ptep,
+                                          pte_t pte, bool *any_young,
+                                          bool *any_dirty)
+{
+        const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+        int max_nr = (end - addr) / PAGE_SIZE;
+
+        return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+                               any_young, any_dirty);
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
                                 unsigned long addr, unsigned long end,
                                 struct mm_walk *walk)
@@ -456,13 +468,10 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
                  * next pte in the range.
                  */
                 if (folio_test_large(folio)) {
-                        const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
-                                                FPB_IGNORE_SOFT_DIRTY;
-                        int max_nr = (end - addr) / PAGE_SIZE;
                         bool any_young;
 
-                        nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
-                                             fpb_flags, NULL, &any_young);
+                        nr = madvise_folio_pte_batch(addr, end, folio, pte,
+                                                     ptent, &any_young, NULL);
                         if (any_young)
                                 ptent = pte_mkyoung(ptent);
 
diff --git a/mm/memory.c b/mm/memory.c
index 33d87b64d15d..9e07d1b9020c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
                 flags |= FPB_IGNORE_SOFT_DIRTY;
 
         nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-                             &any_writable, NULL);
+                             &any_writable, NULL, NULL);
         folio_ref_add(folio, nr);
         if (folio_test_anon(folio)) {
                 if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
          */
         if (unlikely(folio_test_large(folio) && max_nr != 1)) {
                 nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-                                     NULL, NULL);
+                                     NULL, NULL, NULL);
 
                 zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
                                        addr, details, rss, force_flush,
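[Illustration] For readers unfamiliar with the calling convention, here is a
stand-alone sketch (not the kernel implementation; ex_pte and batch_scan() are
invented for illustration) of the optional out-pointer pattern that
folio_pte_batch() follows: each of any_writable/any_young/any_dirty is only
maintained when the caller passes a non-NULL pointer, and it accumulates bits
seen on entries after the first:

/*
 * Stand-alone sketch of the optional out-pointer accumulation pattern;
 * the types and helper are invented for illustration.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct ex_pte {
        bool young;
        bool dirty;
};

/* OR the bits of entries [1, nr) into whichever outputs were requested. */
static int batch_scan(const struct ex_pte *ptes, int nr,
                      bool *any_young, bool *any_dirty)
{
        if (any_young)
                *any_young = false;
        if (any_dirty)
                *any_dirty = false;

        for (int i = 1; i < nr; i++) {
                if (any_young)
                        *any_young |= ptes[i].young;
                if (any_dirty)
                        *any_dirty |= ptes[i].dirty;
        }
        return nr;
}

int main(void)
{
        struct ex_pte ptes[3] = { {true, false}, {false, true}, {false, false} };
        bool young, dirty;

        batch_scan(ptes, 3, &young, &dirty);    /* caller wants both */
        batch_scan(ptes, 3, NULL, &dirty);      /* caller only cares about dirty */
        printf("young=%d dirty=%d\n", young, dirty);
        return 0;
}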
From patchwork Thu Apr 18 13:44:35 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13634764
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com,
    peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v10 4/4] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
Date: Thu, 18 Apr 2024 21:44:35 +0800
Message-Id: <20240418134435.6092-5-ioworker0@gmail.com>
In-Reply-To: <20240418134435.6092-1-ioworker0@gmail.com>
References: <20240418134435.6092-1-ioworker0@gmail.com>
This patch optimizes lazyfreeing with PTE-mapped mTHP[1] (Inspired by
David Hildenbrand[2]). We aim to avoid unnecessary folio splitting if the
large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior is changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.
On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE) in
seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Reviewed-by: Ryan Roberts
Acked-by: David Hildenbrand
Signed-off-by: Lance Yang
---
 mm/madvise.c | 85 +++++++++++++++++++++++++++-------------------------
 1 file changed, 44 insertions(+), 41 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 4597a3568e7e..ed125ad8a21e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -643,6 +643,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
                                   unsigned long end, struct mm_walk *walk)
 
 {
+        const cydp_t cydp_flags = CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY;
         struct mmu_gather *tlb = walk->private;
         struct mm_struct *mm = tlb->mm;
         struct vm_area_struct *vma = walk->vma;
@@ -697,44 +698,57 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
                         continue;
 
                 /*
-                 * If pmd isn't transhuge but the folio is large and
-                 * is owned by only this process, split it and
-                 * deactivate all pages.
+                 * If we encounter a large folio, only split it if it is not
+                 * fully mapped within the range we are operating on. Otherwise
+                 * leave it as is so that it can be marked as lazyfree. If we
+                 * fail to split a folio, leave it in place and advance to the
+                 * next pte in the range.
                  */
                 if (folio_test_large(folio)) {
-                        int err;
+                        bool any_young, any_dirty;
 
-                        if (folio_likely_mapped_shared(folio))
-                                break;
-                        if (!folio_trylock(folio))
-                                break;
-                        folio_get(folio);
-                        arch_leave_lazy_mmu_mode();
-                        pte_unmap_unlock(start_pte, ptl);
-                        start_pte = NULL;
-                        err = split_folio(folio);
-                        folio_unlock(folio);
-                        folio_put(folio);
-                        if (err)
-                                break;
-                        start_pte = pte =
-                                pte_offset_map_lock(mm, pmd, addr, &ptl);
-                        if (!start_pte)
-                                break;
-                        arch_enter_lazy_mmu_mode();
-                        pte--;
-                        addr -= PAGE_SIZE;
-                        continue;
+                        nr = madvise_folio_pte_batch(addr, end, folio, pte,
+                                                     ptent, &any_young, &any_dirty);
+
+                        if (nr < folio_nr_pages(folio)) {
+                                int err;
+
+                                if (folio_likely_mapped_shared(folio))
+                                        continue;
+                                if (!folio_trylock(folio))
+                                        continue;
+                                folio_get(folio);
+                                arch_leave_lazy_mmu_mode();
+                                pte_unmap_unlock(start_pte, ptl);
+                                start_pte = NULL;
+                                err = split_folio(folio);
+                                folio_unlock(folio);
+                                folio_put(folio);
+                                pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+                                start_pte = pte;
+                                if (!start_pte)
+                                        break;
+                                arch_enter_lazy_mmu_mode();
+                                if (!err)
+                                        nr = 0;
+                                continue;
+                        }
+
+                        if (any_young)
+                                ptent = pte_mkyoung(ptent);
+                        if (any_dirty)
+                                ptent = pte_mkdirty(ptent);
                 }
 
                 if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
                         if (!folio_trylock(folio))
                                 continue;
                         /*
-                         * If folio is shared with others, we mustn't clear
-                         * the folio's dirty flag.
+                         * If we have a large folio at this point, we know it is
+                         * fully mapped so if its mapcount is the same as its
+                         * number of pages, it must be exclusive.
                          */
-                        if (folio_mapcount(folio) != 1) {
+                        if (folio_mapcount(folio) != folio_nr_pages(folio)) {
                                 folio_unlock(folio);
                                 continue;
                         }
@@ -750,19 +764,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
                 }
 
                 if (pte_young(ptent) || pte_dirty(ptent)) {
-                        /*
-                         * Some of architecture(ex, PPC) don't update TLB
-                         * with set_pte_at and tlb_remove_tlb_entry so for
-                         * the portability, remap the pte with old|clean
-                         * after pte clearing.
-                         */
-                        ptent = ptep_get_and_clear_full(mm, addr, pte,
-                                                        tlb->fullmm);
-
-                        ptent = pte_mkold(ptent);
-                        ptent = pte_mkclean(ptent);
-                        set_pte_at(mm, addr, pte, ptent);
-                        tlb_remove_tlb_entry(tlb, pte, addr);
+                        clear_young_dirty_ptes(vma, addr, pte, nr, cydp_flags);
+                        tlb_remove_tlb_entries(tlb, pte, nr, addr);
                 }
                 folio_mark_lazyfree(folio);
         }
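[Illustration] For context on the numbers above, here is a rough user-space
sketch of the kind of measurement quoted: fault in a 1 GiB anonymous mapping,
then time a single madvise(MADV_FREE) call over it. This is an assumed
reproduction, not the author's actual benchmark; in particular, the mTHP folio
sizes would also require enabling the matching
/sys/kernel/mm/transparent_hugepage/hugepages-*kB knobs, which is not shown
here.

/* Rough sketch of timing madvise(MADV_FREE) on a 1 GiB anonymous VMA. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
        size_t len = 1UL << 30; /* 1 GiB */
        struct timespec t0, t1;
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(buf, 1, len);    /* fault the pages in so there is work to do */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (madvise(buf, len, MADV_FREE))
                perror("madvise");
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("MADV_FREE took %.6f s\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        munmap(buf, len);
        return 0;
}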