From patchwork Tue Sep 24 06:09:57 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13810167
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org,
    muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org,
    rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com,
    ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    Qi Zheng
Subject: [PATCH v4 05/13] arm: adjust_pte() use pte_offset_map_rw_nolock()
Date: Tue, 24 Sep 2024 14:09:57 +0800

In do_adjust_pte(), we may modify the pte entry. The corresponding pmd
entry may have been modified concurrently. Therefore, to ensure the
stability of the pmd entry, use pte_offset_map_rw_nolock() to replace
pte_offset_map_nolock(), and do a pmd_same() check after holding the
PTL.

All callers of update_mmu_cache_range() hold the vmf->ptl, so we can
determine whether split PTE locks are being used by checking the
following, just as we do elsewhere in the kernel:

	ptl != vmf->ptl

Then we can delete do_pte_lock() and do_pte_unlock().

Signed-off-by: Qi Zheng
Acked-by: David Hildenbrand
Reviewed-by: Muchun Song
---
Hi David and Muchun,

I did not remove your Acked-by and Reviewed-by tags since there is no
functional change.

 arch/arm/mm/fault-armv.c | 53 +++++++++++++++++-----------------------
 1 file changed, 22 insertions(+), 31 deletions(-)

diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index 831793cd6ff94..2bec87c3327d2 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -61,32 +61,8 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
 	return ret;
 }
 
-#if defined(CONFIG_SPLIT_PTE_PTLOCKS)
-/*
- * If we are using split PTE locks, then we need to take the page
- * lock here. Otherwise we are using shared mm->page_table_lock
- * which is already locked, thus cannot take it.
- */
-static inline void do_pte_lock(spinlock_t *ptl)
-{
-	/*
-	 * Use nested version here to indicate that we are already
-	 * holding one similar spinlock.
-	 */
-	spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
-}
-
-static inline void do_pte_unlock(spinlock_t *ptl)
-{
-	spin_unlock(ptl);
-}
-#else /* !defined(CONFIG_SPLIT_PTE_PTLOCKS) */
-static inline void do_pte_lock(spinlock_t *ptl) {}
-static inline void do_pte_unlock(spinlock_t *ptl) {}
-#endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */
-
 static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
-		      unsigned long pfn)
+		      unsigned long pfn, struct vm_fault *vmf)
 {
 	spinlock_t *ptl;
 	pgd_t *pgd;
@@ -94,6 +70,7 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
+	pmd_t pmdval;
 	int ret;
 
 	pgd = pgd_offset(vma->vm_mm, address);
@@ -112,20 +89,33 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
 	if (pmd_none_or_clear_bad(pmd))
 		return 0;
 
+again:
 	/*
 	 * This is called while another page table is mapped, so we
 	 * must use the nested version. This also means we need to
 	 * open-code the spin-locking.
 	 */
-	pte = pte_offset_map_nolock(vma->vm_mm, pmd, address, &ptl);
+	pte = pte_offset_map_rw_nolock(vma->vm_mm, pmd, address, &pmdval, &ptl);
 	if (!pte)
 		return 0;
 
-	do_pte_lock(ptl);
+	/*
+	 * If we are using split PTE locks, then we need to take the page
+	 * lock here. Otherwise we are using shared mm->page_table_lock
+	 * which is already locked, thus cannot take it.
+	 */
+	if (ptl != vmf->ptl) {
+		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+		if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
+			pte_unmap_unlock(pte, ptl);
+			goto again;
+		}
+	}
 
 	ret = do_adjust_pte(vma, address, pfn, pte);
 
-	do_pte_unlock(ptl);
+	if (ptl != vmf->ptl)
+		spin_unlock(ptl);
 	pte_unmap(pte);
 
 	return ret;
@@ -133,7 +123,8 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
 
 static void
 make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
-	unsigned long addr, pte_t *ptep, unsigned long pfn)
+	unsigned long addr, pte_t *ptep, unsigned long pfn,
+	struct vm_fault *vmf)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *mpnt;
@@ -160,7 +151,7 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
 		if (!(mpnt->vm_flags & VM_MAYSHARE))
 			continue;
 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn);
+		aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn, vmf);
 	}
 	flush_dcache_mmap_unlock(mapping);
 	if (aliases)
@@ -203,7 +194,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
 		__flush_dcache_folio(mapping, folio);
 	if (mapping) {
 		if (cache_is_vivt())
-			make_coherent(mapping, vma, addr, ptep, pfn);
+			make_coherent(mapping, vma, addr, ptep, pfn, vmf);
 		else if (vma->vm_flags & VM_EXEC)
 			__flush_icache_all();
 	}
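
Putting the hunks together, the locking flow in adjust_pte() after this
change is roughly the sketch below. This is only an illustration assembled
from the diff above: the pgd/p4d/pud/pmd walk is elided and the comments are
condensed, so refer to the patch itself for the exact code.

static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
		      unsigned long pfn, struct vm_fault *vmf)
{
	spinlock_t *ptl;
	pmd_t pmdval;
	pmd_t *pmd;
	pte_t *pte;
	int ret;

	/* ... walk pgd/p4d/pud down to pmd, returning 0 on none/bad ... */

again:
	/* Map the PTE without locking and snapshot the pmd seen at map time. */
	pte = pte_offset_map_rw_nolock(vma->vm_mm, pmd, address, &pmdval, &ptl);
	if (!pte)
		return 0;

	/*
	 * ptl != vmf->ptl means split PTE locks are in use, so this PTE table
	 * has its own lock that must be taken (nested, since the caller
	 * already holds vmf->ptl for another page table). Otherwise both map
	 * to the shared mm->page_table_lock, which the caller already holds.
	 */
	if (ptl != vmf->ptl) {
		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
		/* The pmd may have changed before the lock was taken; recheck. */
		if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
			pte_unmap_unlock(pte, ptl);
			goto again;
		}
	}

	ret = do_adjust_pte(vma, address, pfn, pte);

	if (ptl != vmf->ptl)
		spin_unlock(ptl);
	pte_unmap(pte);

	return ret;
}

The ptl != vmf->ptl comparison replaces the old compile-time
CONFIG_SPLIT_PTE_PTLOCKS distinction: with split PTE locks the two locks
differ and the nested lock is taken, while with a shared
mm->page_table_lock they are the same lock, which the caller of
update_mmu_cache_range() already holds.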