From patchwork Wed Aug 21 08:18:53 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13771013
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Qi Zheng
Subject: [PATCH 10/14] mm: page_vma_mapped_walk: map_pte() use pte_offset_map_maywrite_nolock()
Date: Wed, 21 Aug 2024 16:18:53 +0800

In the caller of map_pte(), we may modify pvmw->pte after acquiring
pvmw->ptl, so convert it to use pte_offset_map_maywrite_nolock(). At this
point, the write lock of mmap_lock is not held, and no pte_same() check is
performed after pvmw->ptl is taken, so we should record pmdval and do a
pmd_same() check under pvmw->ptl to ensure the stability of pvmw->pmd.
Signed-off-by: Qi Zheng
---
 mm/page_vma_mapped.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa2087..da806f3a5047d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -13,9 +13,11 @@ static inline bool not_found(struct page_vma_mapped_walk *pvmw)
 	return false;
 }
 
-static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
+static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
+		    spinlock_t **ptlp)
 {
 	pte_t ptent;
+	pmd_t pmdval;
 
 	if (pvmw->flags & PVMW_SYNC) {
 		/* Use the stricter lookup */
@@ -25,6 +27,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 		return !!pvmw->pte;
 	}
 
+again:
 	/*
 	 * It is important to return the ptl corresponding to pte,
 	 * in case *pvmw->pmd changes underneath us; so we need to
@@ -32,10 +35,11 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 	 * proceeds to loop over next ptes, and finds a match later.
 	 * Though, in most cases, page lock already protects this.
 	 */
-	pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
-					  pvmw->address, ptlp);
+	pvmw->pte = pte_offset_map_maywrite_nolock(pvmw->vma->vm_mm, pvmw->pmd,
+						   pvmw->address, &pmdval, ptlp);
 	if (!pvmw->pte)
 		return false;
+	*pmdvalp = pmdval;
 
 	ptent = ptep_get(pvmw->pte);
 
@@ -69,6 +73,12 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 	}
 	pvmw->ptl = *ptlp;
 	spin_lock(pvmw->ptl);
+
+	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pvmw->pmd)))) {
+		spin_unlock(pvmw->ptl);
+		goto again;
+	}
+
 	return true;
 }
 
@@ -278,7 +288,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			step_forward(pvmw, PMD_SIZE);
 			continue;
 		}
-		if (!map_pte(pvmw, &ptl)) {
+		if (!map_pte(pvmw, &pmde, &ptl)) {
 			if (!pvmw->pte)
 				goto restart;
 			goto next_pte;
@@ -307,6 +317,12 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (!pvmw->ptl) {
 			pvmw->ptl = ptl;
 			spin_lock(pvmw->ptl);
+			if (unlikely(!pmd_same(pmde, pmdp_get_lockless(pvmw->pmd)))) {
+				pte_unmap_unlock(pvmw->pte, pvmw->ptl);
+				pvmw->ptl = NULL;
+				pvmw->pte = NULL;
+				goto restart;
+			}
 		}
 		goto this_pte;
 	} while (pvmw->address < end);