From patchwork Tue Sep 24 06:10:02 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13810173
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org,
    muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org,
    rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com,
    ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    Qi Zheng
Subject: [PATCH v4 10/13] mm: page_vma_mapped_walk: map_pte() use
    pte_offset_map_rw_nolock()
Date: Tue, 24 Sep 2024 14:10:02 +0800

In the caller of map_pte(), we may modify pvmw->pte after acquiring
pvmw->ptl, so convert it to use pte_offset_map_rw_nolock(). Since the
pte_same() check is not performed after pvmw->ptl is held, we must
record the pmd value when mapping the PTE page and re-check it with
pmd_same() under the lock, to ensure the stability of pvmw->pmd.

Signed-off-by: Qi Zheng
---
 mm/page_vma_mapped.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa2087..6410f29b37c1b 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -13,9 +13,11 @@ static inline bool not_found(struct page_vma_mapped_walk *pvmw)
 	return false;
 }
 
-static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
+static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
+		    spinlock_t **ptlp)
 {
 	pte_t ptent;
+	pmd_t pmdval;
 
 	if (pvmw->flags & PVMW_SYNC) {
 		/* Use the stricter lookup */
@@ -25,6 +27,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 		return !!pvmw->pte;
 	}
 
+again:
 	/*
 	 * It is important to return the ptl corresponding to pte,
 	 * in case *pvmw->pmd changes underneath us; so we need to
@@ -32,10 +35,11 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 	 * proceeds to loop over next ptes, and finds a match later.
 	 * Though, in most cases, page lock already protects this.
 	 */
-	pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
-					  pvmw->address, ptlp);
+	pvmw->pte = pte_offset_map_rw_nolock(pvmw->vma->vm_mm, pvmw->pmd,
+					     pvmw->address, &pmdval, ptlp);
 	if (!pvmw->pte)
 		return false;
+	*pmdvalp = pmdval;
 
 	ptent = ptep_get(pvmw->pte);
 
@@ -67,8 +71,13 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 	} else if (!pte_present(ptent)) {
 		return false;
 	}
+	spin_lock(*ptlp);
+	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pvmw->pmd)))) {
+		pte_unmap_unlock(pvmw->pte, *ptlp);
+		goto again;
+	}
 	pvmw->ptl = *ptlp;
-	spin_lock(pvmw->ptl);
+
 	return true;
 }
 
@@ -278,7 +287,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			step_forward(pvmw, PMD_SIZE);
 			continue;
 		}
-		if (!map_pte(pvmw, &ptl)) {
+		if (!map_pte(pvmw, &pmde, &ptl)) {
 			if (!pvmw->pte)
 				goto restart;
 			goto next_pte;
@@ -307,6 +316,12 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (!pvmw->ptl) {
 			pvmw->ptl = ptl;
 			spin_lock(pvmw->ptl);
+			if (unlikely(!pmd_same(pmde, pmdp_get_lockless(pvmw->pmd)))) {
+				pte_unmap_unlock(pvmw->pte, pvmw->ptl);
+				pvmw->ptl = NULL;
+				pvmw->pte = NULL;
+				goto restart;
+			}
 		}
 		goto this_pte;
 	} while (pvmw->address < end);
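
For reference, here is a minimal sketch (not part of the patch) of the
"map without the ptl, then lock and re-validate the pmd" pattern that
map_pte() now follows. my_walk_pte() is a hypothetical caller invented
for illustration; pte_offset_map_rw_nolock(), pmdp_get_lockless(),
pmd_same() and pte_unmap_unlock() are the kernel helpers the patch
itself uses:

#include <linux/mm.h>

static pte_t *my_walk_pte(struct mm_struct *mm, pmd_t *pmd,
			  unsigned long addr, spinlock_t **ptlp)
{
	pmd_t pmdval;
	pte_t *pte;

again:
	/* Map the PTE page and snapshot the pmd entry, without the ptl. */
	pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, ptlp);
	if (!pte)
		return NULL;	/* no PTE page here (e.g. THP, or freed) */

	spin_lock(*ptlp);
	/*
	 * The PTE page may have been freed and the pmd repopulated
	 * between the lockless lookup and taking the ptl, so the
	 * snapshot must be re-validated under the lock before any
	 * PTE is modified.
	 */
	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
		pte_unmap_unlock(pte, *ptlp);
		goto again;
	}

	/* ptl held and pmd stable: the caller may safely write *pte. */
	return pte;
}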