From patchwork Wed Sep 4 08:40:18 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13790200
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Qi Zheng
Subject: [PATCH v3 10/14] mm: page_vma_mapped_walk: map_pte() use pte_offset_map_rw_nolock()
Date: Wed, 4 Sep 2024 16:40:18 +0800
Message-Id: <20240904084022.32728-11-zhengqi.arch@bytedance.com>
In-Reply-To: <20240904084022.32728-1-zhengqi.arch@bytedance.com>
References: <20240904084022.32728-1-zhengqi.arch@bytedance.com>

In the caller of map_pte(), we may modify pvmw->pte after acquiring
pvmw->ptl, so convert it to using pte_offset_map_rw_nolock(). At that
point, the pte_same() check has not yet been performed with pvmw->ptl
held, so we should record pmdval and do a pmd_same() check to ensure
the stability of pvmw->pmd.
Signed-off-by: Qi Zheng
---
 mm/page_vma_mapped.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa2087..f1d73fd448708 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -13,9 +13,11 @@ static inline bool not_found(struct page_vma_mapped_walk *pvmw)
 	return false;
 }
 
-static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
+static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
+		    spinlock_t **ptlp)
 {
 	pte_t ptent;
+	pmd_t pmdval;
 
 	if (pvmw->flags & PVMW_SYNC) {
 		/* Use the stricter lookup */
@@ -25,6 +27,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 		return !!pvmw->pte;
 	}
 
+again:
 	/*
 	 * It is important to return the ptl corresponding to pte,
 	 * in case *pvmw->pmd changes underneath us; so we need to
@@ -32,10 +35,11 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 	 * proceeds to loop over next ptes, and finds a match later.
 	 * Though, in most cases, page lock already protects this.
 	 */
-	pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
-					  pvmw->address, ptlp);
+	pvmw->pte = pte_offset_map_rw_nolock(pvmw->vma->vm_mm, pvmw->pmd,
+					     pvmw->address, &pmdval, ptlp);
 	if (!pvmw->pte)
 		return false;
+	*pmdvalp = pmdval;
 
 	ptent = ptep_get(pvmw->pte);
@@ -69,6 +73,12 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
 	}
 	pvmw->ptl = *ptlp;
 	spin_lock(pvmw->ptl);
+
+	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pvmw->pmd)))) {
+		spin_unlock(pvmw->ptl);
+		goto again;
+	}
+
 	return true;
 }
 
@@ -278,7 +288,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			step_forward(pvmw, PMD_SIZE);
 			continue;
 		}
-		if (!map_pte(pvmw, &ptl)) {
+		if (!map_pte(pvmw, &pmde, &ptl)) {
 			if (!pvmw->pte)
 				goto restart;
 			goto next_pte;
@@ -307,6 +317,12 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (!pvmw->ptl) {
 			pvmw->ptl = ptl;
 			spin_lock(pvmw->ptl);
+			if (unlikely(!pmd_same(pmde, pmdp_get_lockless(pvmw->pmd)))) {
+				pte_unmap_unlock(pvmw->pte, pvmw->ptl);
+				pvmw->ptl = NULL;
+				pvmw->pte = NULL;
+				goto restart;
+			}
 		}
 		goto this_pte;
 	} while (pvmw->address < end);