From patchwork Wed Sep 4 08:40:21 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13790205
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Qi Zheng
Subject: [PATCH v3 13/14] mm: pgtable: remove pte_offset_map_nolock()
Date: Wed, 4 Sep 2024 16:40:21 +0800
Message-Id: <20240904084022.32728-14-zhengqi.arch@bytedance.com>
In-Reply-To: <20240904084022.32728-1-zhengqi.arch@bytedance.com>
References: <20240904084022.32728-1-zhengqi.arch@bytedance.com>

Now that there are no remaining users of pte_offset_map_nolock(), remove it.
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 Documentation/mm/split_page_table_lock.rst |  3 ---
 include/linux/mm.h                         |  2 --
 mm/pgtable-generic.c                       | 21 ---------------------
 3 files changed, 26 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 08d0e706a32db..581446d4a4eba 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -16,9 +16,6 @@ There are helpers to lock/unlock a table and other accessor functions:
  - pte_offset_map_lock()
 	maps PTE and takes PTE table lock, returns pointer to PTE with
 	pointer to its PTE table lock, or returns NULL if no PTE table;
- - pte_offset_map_nolock()
-	maps PTE, returns pointer to PTE with pointer to its PTE table
-	lock (not taken), or returns NULL if no PTE table;
  - pte_offset_map_ro_nolock()
 	maps PTE, returns pointer to PTE with pointer to its PTE table
 	lock (not taken), or returns NULL if no PTE table;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1fde9242231c9..5b5a774902bd6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3004,8 +3004,6 @@ static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
 	return pte;
 }
 
-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			unsigned long addr, spinlock_t **ptlp);
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 			unsigned long addr, spinlock_t **ptlp);
 pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 262b7065a5a2e..c68aa655b7872 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -305,18 +305,6 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 	return NULL;
 }
 
-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp)
-{
-	pmd_t pmdval;
-	pte_t *pte;
-
-	pte = __pte_offset_map(pmd, addr, &pmdval);
-	if (likely(pte))
-		*ptlp = pte_lockptr(mm, &pmdval);
-	return pte;
-}
-
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr, spinlock_t **ptlp)
 {
@@ -374,15 +362,6 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
  * and disconnected table. Until pte_unmap(pte) unmaps and rcu_read_unlock()s
  * afterwards.
  *
- * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
- * but when successful, it also outputs a pointer to the spinlock in ptlp - as
- * pte_offset_map_lock() does, but in this case without locking it. This helps
- * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
- * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
- * pointer for the page table that it returns. In principle, the caller should
- * recheck *pmd once the lock is taken; in practice, no callsite needs that -
- * either the mmap_lock for write, or pte_same() check on contents, is enough.
- *
  * pte_offset_map_ro_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
  * but when successful, it also outputs a pointer to the spinlock in ptlp - as
  * pte_offset_map_lock() does, but in this case without locking it. This helps
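For reviewers tracing the conversion: a former pte_offset_map_nolock() caller doing a
read-only walk switches 1:1 to pte_offset_map_ro_nolock(), since the calling convention
is identical. The sketch below is illustrative only (not part of this patch; the function
walk_one_pte() and its body are invented for the example, and as kernel-internal code it
is not a standalone compilable program):

```
/* Hypothetical caller, before and after this series. */
static bool walk_one_pte(struct mm_struct *mm, pmd_t *pmd,
			 unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* Before: pte = pte_offset_map_nolock(mm, pmd, addr, &ptl); */
	pte = pte_offset_map_ro_nolock(mm, pmd, addr, &ptl);
	if (!pte)
		return false;	/* no PTE table */

	spin_lock(ptl);
	/* ... read-only access to *pte under ptl ... */
	spin_unlock(ptl);
	pte_unmap(pte);
	return true;
}
```

Callers that go on to modify the PTE would instead use pte_offset_map_rw_nolock()
(introduced earlier in this series) and revalidate after taking the lock.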