From patchwork Tue Sep 24 06:10:05 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13810181
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, muchun.song@linux.dev,
	vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org,
	vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com,
	christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	Qi Zheng
Subject: [PATCH v4 13/13] mm: pgtable: remove pte_offset_map_nolock()
Date: Tue, 24 Sep 2024 14:10:05 +0800
Message-Id: <8eb7fcecf9ed8268980d0bd040c0a4f349cbca8f.1727148662.git.zhengqi.arch@bytedance.com>

Now that there are no remaining users of pte_offset_map_nolock(), remove it.
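For illustration only (not part of this patch), a minimal sketch of what a
caller converted away from pte_offset_map_nolock() might look like when using
pte_offset_map_ro_nolock(). The walk_one_pte() helper is hypothetical; only
the pte_offset_map_ro_nolock() signature is taken from this series, the rest
are standard kernel helpers:

static bool walk_one_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte;
	bool present;

	/* Map the PTE and get its lock pointer, without taking the lock. */
	pte = pte_offset_map_ro_nolock(mm, pmd, addr, &ptl);
	if (!pte)
		return false;	/* no PTE table */

	spin_lock(ptl);
	/* Read-only access to the PTE under its page table lock. */
	present = pte_present(ptep_get(pte));
	spin_unlock(ptl);
	pte_unmap(pte);
	return present;
}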
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
Acked-by: David Hildenbrand
---
 Documentation/mm/split_page_table_lock.rst |  3 ---
 include/linux/mm.h                         |  2 --
 mm/pgtable-generic.c                       | 21 ---------------------
 3 files changed, 26 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 08d0e706a32db..581446d4a4eba 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -16,9 +16,6 @@ There are helpers to lock/unlock a table and other accessor functions:
  - pte_offset_map_lock()
	maps PTE and takes PTE table lock, returns pointer to PTE with
	pointer to its PTE table lock, or returns NULL if no PTE table;
- - pte_offset_map_nolock()
-	maps PTE, returns pointer to PTE with pointer to its PTE table
-	lock (not taken), or returns NULL if no PTE table;
  - pte_offset_map_ro_nolock()
	maps PTE, returns pointer to PTE with pointer to its PTE table
	lock (not taken), or returns NULL if no PTE table;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9a4550cd830c9..e2a4502ab019b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3015,8 +3015,6 @@ static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
 	return pte;
 }
 
-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			unsigned long addr, spinlock_t **ptlp);
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 			unsigned long addr, spinlock_t **ptlp);
 pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 262b7065a5a2e..c68aa655b7872 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -305,18 +305,6 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 	return NULL;
 }
 
-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp)
-{
-	pmd_t pmdval;
-	pte_t *pte;
-
-	pte = __pte_offset_map(pmd, addr, &pmdval);
-	if (likely(pte))
-		*ptlp = pte_lockptr(mm, &pmdval);
-	return pte;
-}
-
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr, spinlock_t **ptlp)
 {
@@ -374,15 +362,6 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
  * and disconnected table. Until pte_unmap(pte) unmaps and rcu_read_unlock()s
  * afterwards.
  *
- * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
- * but when successful, it also outputs a pointer to the spinlock in ptlp - as
- * pte_offset_map_lock() does, but in this case without locking it. This helps
- * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
- * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
- * pointer for the page table that it returns. In principle, the caller should
- * recheck *pmd once the lock is taken; in practice, no callsite needs that -
- * either the mmap_lock for write, or pte_same() check on contents, is enough.
- *
  * pte_offset_map_ro_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
  * but when successful, it also outputs a pointer to the spinlock in ptlp - as
  * pte_offset_map_lock() does, but in this case without locking it. This helps