From patchwork Tue Sep 24 06:09:53 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13810124
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Qi Zheng
Subject: [PATCH v4 01/13] mm: pgtable: introduce pte_offset_map_{ro|rw}_nolock()
Date: Tue, 24 Sep 2024 14:09:53 +0800
Message-Id: <1a4fea06f8cada72553a8d8992a92e9c09f2c9d4.1727148662.git.zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)

Currently, the usage of pte_offset_map_nolock() can be divided into the
following two cases:

1) After acquiring PTL, only read-only operations are performed on the
   PTE page. In this case, the RCU lock in pte_offset_map_nolock() will
   ensure that the PTE page will not be freed, and there is no need to
   worry about whether the pmd entry is modified.

2) After acquiring PTL, the pte or pmd entries may be modified. At this
   time, we need to ensure that the pmd entry has not been modified
   concurrently.

To more clearly distinguish between these two cases, this commit
introduces two new helper functions to replace pte_offset_map_nolock().

For 1), just rename it to pte_offset_map_ro_nolock(). For 2), in
addition to changing the name to pte_offset_map_rw_nolock(), it also
outputs the pmdval when successful. It is intended for may-write cases
where modifications to the page table may happen after the
corresponding spinlock is taken. Before performing any write, callers
must make sure the page table is still stable, e.g. by checking
pte_same() or by checking pmd_same() against the output pmdval.

Note: "RO" / "RW" expresses the intended semantics, not that the *kmap*
will be read-only/read-write protected.

Subsequent commits will convert the pte_offset_map_nolock() call sites
to these two functions one by one, and finally remove
pte_offset_map_nolock() entirely.
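For illustration only (this snippet is not part of the diff below), a
may-write caller of the new helper would follow roughly the pattern
sketched here; the function name, its arguments and the retry label are
placeholders, only the pte_offset_map_rw_nolock() call and the
pmd_same() recheck reflect the intended usage:

	/* Illustrative sketch, not taken from this series. */
	static void modify_ptes(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
	{
		spinlock_t *ptl;
		pmd_t pmdval;
		pte_t *pte;

	again:
		pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
		if (!pte)
			return;		/* no PTE table here */
		spin_lock(ptl);
		if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
			/* PTE table changed or was freed under us: retry */
			pte_unmap_unlock(pte, ptl);
			goto again;
		}
		/* *pmd is now known to be stable: writes to the PTE page are OK */
		pte_unmap_unlock(pte, ptl);
	}
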
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
Acked-by: David Hildenbrand
---
 Documentation/mm/split_page_table_lock.rst |  7 +++
 include/linux/mm.h                         |  5 +++
 mm/pgtable-generic.c                       | 50 ++++++++++++++++++++++
 3 files changed, 62 insertions(+)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index e4f6972eb6c04..08d0e706a32db 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -19,6 +19,13 @@ There are helpers to lock/unlock a table and other accessor functions:
  - pte_offset_map_nolock()
 	maps PTE, returns pointer to PTE with pointer to its PTE table
 	lock (not taken), or returns NULL if no PTE table;
+ - pte_offset_map_ro_nolock()
+	maps PTE, returns pointer to PTE with pointer to its PTE table
+	lock (not taken), or returns NULL if no PTE table;
+ - pte_offset_map_rw_nolock()
+	maps PTE, returns pointer to PTE with pointer to its PTE table
+	lock (not taken) and the value of its pmd entry, or returns NULL
+	if no PTE table;
  - pte_offset_map()
 	maps PTE, returns pointer to PTE, or returns NULL if no PTE table;
  - pte_unmap()
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 546a9406859ad..9a4550cd830c9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3017,6 +3017,11 @@ static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
 
 pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
 			unsigned long addr, spinlock_t **ptlp);
+pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, spinlock_t **ptlp);
+pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, pmd_t *pmdvalp,
+				spinlock_t **ptlp);
 
 #define pte_unmap_unlock(pte, ptl)	do {		\
 	spin_unlock(ptl);				\
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index a78a4adf711ac..262b7065a5a2e 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -317,6 +317,33 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
 	return pte;
 }
 
+pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, spinlock_t **ptlp)
+{
+	pmd_t pmdval;
+	pte_t *pte;
+
+	pte = __pte_offset_map(pmd, addr, &pmdval);
+	if (likely(pte))
+		*ptlp = pte_lockptr(mm, &pmdval);
+	return pte;
+}
+
+pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, pmd_t *pmdvalp,
+				spinlock_t **ptlp)
+{
+	pmd_t pmdval;
+	pte_t *pte;
+
+	VM_WARN_ON_ONCE(!pmdvalp);
+	pte = __pte_offset_map(pmd, addr, &pmdval);
+	if (likely(pte))
+		*ptlp = pte_lockptr(mm, &pmdval);
+	*pmdvalp = pmdval;
+	return pte;
+}
+
 /*
  * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation
  * __pte_offset_map_lock() below, is usually called with the pmd pointer for
@@ -356,6 +383,29 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
  * recheck *pmd once the lock is taken; in practice, no callsite needs that -
  * either the mmap_lock for write, or pte_same() check on contents, is enough.
  *
+ * pte_offset_map_ro_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
+ * but when successful, it also outputs a pointer to the spinlock in ptlp - as
+ * pte_offset_map_lock() does, but in this case without locking it. This helps
+ * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
+ * act on a changed *pmd: pte_offset_map_ro_nolock() provides the correct spinlock
+ * pointer for the page table that it returns. Even after grabbing the spinlock,
+ * we might be looking either at a page table that is still mapped or one that
+ * was unmapped and is about to get freed. But for R/O access this is sufficient.
+ * So it is only applicable for read-only cases where any modification operations
+ * to the page table are not allowed even if the corresponding spinlock is held
+ * afterwards.
+ *
+ * pte_offset_map_rw_nolock(mm, pmd, addr, pmdvalp, ptlp), above, is like
+ * pte_offset_map_ro_nolock(); but when successful, it also outputs the pmdval.
+ * It is applicable for may-write cases where any modification operations to the
+ * page table may happen after the corresponding spinlock is held afterwards.
+ * But the users should make sure the page table is stable, e.g. by checking
+ * pte_same() or by checking pmd_same() against the output pmdval, before
+ * performing the write operations.
+ *
+ * Note: "RO" / "RW" expresses the intended semantics, not that the *kmap* will
+ * be read-only/read-write protected.
+ *
  * Note that free_pgtables(), used after unmapping detached vmas, or when
  * exiting the whole mm, does not take page table lock before freeing a page
  * table, and may not use RCU at all: "outsiders" like khugepaged should avoid