From patchwork Thu Sep 26 06:46:20 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13812889
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org,
 muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org,
 rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com,
 ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 Qi Zheng
Subject: [PATCH v5 07/13] mm: khugepaged: collapse_pte_mapped_thp() use
 pte_offset_map_rw_nolock()
Date: Thu, 26 Sep 2024 14:46:20 +0800
Message-Id: <055e42db68da00ac8ecab94bd2633c7cd965eb1c.1727332572.git.zhengqi.arch@bytedance.com>

In collapse_pte_mapped_thp(), we may modify the pte and pmd entry after
acquiring the ptl, so convert it to using pte_offset_map_rw_nolock().
Since the pte_same() check is not performed after the ptl is held here,
we should get pgt_pmd from pte_offset_map_rw_nolock() and do the
pmd_same() check once the ptl is held.
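To make the locking rule easier to follow, below is a minimal sketch of the
pattern this conversion relies on. It is illustration only and not part of
the patch: the helper name map_ptes_and_revalidate_pmd() is hypothetical,
while pte_offset_map_rw_nolock(), pmd_same(), pmdp_get_lockless() and
pte_unmap_unlock() are the real interfaces used in the diff further down.

#include <linux/mm.h>

/* Hypothetical helper, for illustration only -- not part of mm/khugepaged.c. */
static bool map_ptes_and_revalidate_pmd(struct mm_struct *mm, pmd_t *pmd,
					unsigned long addr)
{
	pmd_t pmdval;
	spinlock_t *ptl;
	pte_t *pte;

	/* Map the PTE page and record the pmd value seen, without taking the PTL. */
	pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
	if (!pte)
		return false;

	spin_lock(ptl);

	/*
	 * The pmd entry may have changed (e.g. the page table was freed or
	 * replaced) between pte_offset_map_rw_nolock() and taking the PTL,
	 * so it must be re-validated before any modification.
	 */
	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
		pte_unmap_unlock(pte, ptl);
		return false;
	}

	/* ... it is now safe to modify the PTEs under the PTL ... */

	pte_unmap_unlock(pte, ptl);
	return true;
}

collapse_pte_mapped_thp() needs this re-validation twice: once right after
the locks are first taken, and again in step 4 for the !pml case, where the
ptl is dropped after the PTEs are cleared and re-acquired later, reopening
the window for the pmd entry to change.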
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 mm/khugepaged.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6498721d4783a..854577f39957d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1605,7 +1605,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED))
 		pml = pmd_lock(mm, pmd);
 
-	start_pte = pte_offset_map_nolock(mm, pmd, haddr, &ptl);
+	start_pte = pte_offset_map_rw_nolock(mm, pmd, haddr, &pgt_pmd, &ptl);
 	if (!start_pte)		/* mmap_lock + page lock should prevent this */
 		goto abort;
 	if (!pml)
@@ -1613,6 +1613,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	else if (ptl != pml)
 		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
 
+	if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd))))
+		goto abort;
+
 	/* step 2: clear page table and adjust rmap */
 	for (i = 0, addr = haddr, pte = start_pte;
 	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
@@ -1645,7 +1648,6 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		nr_ptes++;
 	}
 
-	pte_unmap(start_pte);
 	if (!pml)
 		spin_unlock(ptl);
 
@@ -1658,14 +1660,19 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 4: remove empty page table */
 	if (!pml) {
 		pml = pmd_lock(mm, pmd);
-		if (ptl != pml)
+		if (ptl != pml) {
 			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+			if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd)))) {
+				flush_tlb_mm(mm);
+				goto unlock;
+			}
+		}
 	}
 	pgt_pmd = pmdp_collapse_flush(vma, haddr, pmd);
 	pmdp_get_lockless_sync();
+	pte_unmap_unlock(start_pte, ptl);
 	if (ptl != pml)
-		spin_unlock(ptl);
-	spin_unlock(pml);
+		spin_unlock(pml);
 
 	mmu_notifier_invalidate_range_end(&range);
 
@@ -1685,6 +1692,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		folio_ref_sub(folio, nr_ptes);
 		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
 	}
+unlock:
 	if (start_pte)
 		pte_unmap_unlock(start_pte, ptl);
 	if (pml && pml != ptl)