From patchwork Mon Dec 30 09:07:36 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923111
From: Qi Zheng
To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com,
 palmer@dabbelt.com, tglx@linutronix.de, david@redhat.com, jannh@google.com,
 hughd@google.com, yuzhao@google.com, willy@infradead.org,
 muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com,
 akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com,
 arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com,
 dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng
Subject: [PATCH v4 01/15] Revert "mm: pgtable: make ptlock be freed by RCU"
Date: Mon, 30 Dec 2024 17:07:36 +0800

This reverts commit 2f3443770437e49abc39af26962d293851cbab6d.

Signed-off-by: Qi Zheng
---
 include/linux/mm.h       |  2 +-
 include/linux/mm_types.h |  9 +--------
 mm/memory.c              | 22 ++++++----------------
 3 files changed, 8 insertions(+), 25 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d61b9c7a3a7b0..c49bc7b764535 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2925,7 +2925,7 @@ void ptlock_free(struct ptdesc *ptdesc);
 
 static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-        return &(ptdesc->ptl->ptl);
+        return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 90ab8293d714a..6b27db7f94963 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -434,13 +434,6 @@ FOLIO_MATCH(flags, _flags_2a);
 FOLIO_MATCH(compound_head, _head_2a);
 #undef FOLIO_MATCH
 
-#if ALLOC_SPLIT_PTLOCKS
-struct pt_lock {
-        spinlock_t ptl;
-        struct rcu_head rcu;
-};
-#endif
-
 /**
  * struct ptdesc - Memory descriptor for page tables.
  * @__page_flags: Same as page flags. Powerpc only.
@@ -489,7 +482,7 @@ struct ptdesc {
         union {
                 unsigned long _pt_pad_2;
 #if ALLOC_SPLIT_PTLOCKS
-                struct pt_lock *ptl;
+                spinlock_t *ptl;
 #else
                 spinlock_t ptl;
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index b9b05c3f93f11..9423967b24180 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7034,34 +7034,24 @@ static struct kmem_cache *page_ptl_cachep;
 
 void __init ptlock_cache_init(void)
 {
-        page_ptl_cachep = kmem_cache_create("page->ptl", sizeof(struct pt_lock), 0,
+        page_ptl_cachep = kmem_cache_create("page->ptl", sizeof(spinlock_t), 0,
                         SLAB_PANIC, NULL);
 }
 
 bool ptlock_alloc(struct ptdesc *ptdesc)
 {
-        struct pt_lock *pt_lock;
+        spinlock_t *ptl;
 
-        pt_lock = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
-        if (!pt_lock)
+        ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
+        if (!ptl)
                 return false;
-        ptdesc->ptl = pt_lock;
+        ptdesc->ptl = ptl;
         return true;
 }
 
-static void ptlock_free_rcu(struct rcu_head *head)
-{
-        struct pt_lock *pt_lock;
-
-        pt_lock = container_of(head, struct pt_lock, rcu);
-        kmem_cache_free(page_ptl_cachep, pt_lock);
-}
-
 void ptlock_free(struct ptdesc *ptdesc)
 {
-        struct pt_lock *pt_lock = ptdesc->ptl;
-
-        call_rcu(&pt_lock->rcu, ptlock_free_rcu);
+        kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
From patchwork Mon Dec 30 09:07:37 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923142
From: Qi Zheng
Subject: [PATCH v4 02/15] riscv: mm: Skip pgtable level check in {pud,p4d}_alloc_one
Date: Mon, 30 Dec 2024 17:07:37 +0800
Message-Id: <84ddf857508b98a195a790bc6ff6ab8849b44633.1735549103.git.zhengqi.arch@bytedance.com>
From: Kevin Brodsky

{pmd,pud,p4d}_alloc_one() is never called if the corresponding page table
level is folded, as {pmd,pud,p4d}_alloc() already does the required check.
We can therefore remove the runtime page table level checks in
{pud,p4d}_alloc_one. The PUD helper becomes equivalent to the generic
version, so we remove it altogether.

This is consistent with the way arm64 and x86 handle this situation
(runtime check in p4d_free() only).

Signed-off-by: Kevin Brodsky
Acked-by: Dave Hansen
Signed-off-by: Qi Zheng
Acked-by: Palmer Dabbelt
---
 arch/riscv/include/asm/pgalloc.h | 22 ++++------------------
 1 file changed, 4 insertions(+), 18 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index f52264304f772..8ad0bbe838a24 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -12,7 +12,6 @@
 #include
 
 #ifdef CONFIG_MMU
-#define __HAVE_ARCH_PUD_ALLOC_ONE
 #define __HAVE_ARCH_PUD_FREE
 #include
 
@@ -88,15 +87,6 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd,
         }
 }
 
-#define pud_alloc_one pud_alloc_one
-static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-        if (pgtable_l4_enabled)
-                return __pud_alloc_one(mm, addr);
-
-        return NULL;
-}
-
 #define pud_free pud_free
 static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 {
@@ -118,15 +108,11 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 #define p4d_alloc_one p4d_alloc_one
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-        if (pgtable_l5_enabled) {
-                gfp_t gfp = GFP_PGTABLE_USER;
-
-                if (mm == &init_mm)
-                        gfp = GFP_PGTABLE_KERNEL;
-                return (p4d_t *)get_zeroed_page(gfp);
-        }
+        gfp_t gfp = GFP_PGTABLE_USER;
 
-        return NULL;
+        if (mm == &init_mm)
+                gfp = GFP_PGTABLE_KERNEL;
+        return (p4d_t *)get_zeroed_page(gfp);
 }
 
 static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
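
For readers following along, the reason the runtime check is redundant is visible
in the generic allocation path: p4d_alloc_one() is only ever reached through the
p4d_alloc() wrapper, which only descends into the allocation path when the pgd
entry is empty, and a folded or runtime-disabled P4D level never reports an empty
entry. A simplified sketch of that wrapper (illustrative only, paraphrased from the
generic mm headers, not part of this patch):

/* Simplified sketch of the generic p4d_alloc() path (illustration only). */
static inline p4d_t *p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
                               unsigned long address)
{
        /*
         * With a folded or runtime-disabled P4D level, pgd_none() is always
         * false, so __p4d_alloc() -> p4d_alloc_one() is never reached.
         */
        return (unlikely(pgd_none(*pgd)) && __p4d_alloc(mm, pgd, address)) ?
                NULL : p4d_offset(pgd, address);
}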
From patchwork Mon Dec 30 09:07:38 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923143
From: Qi Zheng
Subject: [PATCH v4 03/15] asm-generic: pgalloc: Provide generic p4d_{alloc_one,free}
Date: Mon, 30 Dec 2024 17:07:38 +0800
Message-Id: <4c4bcc1aa565c6252183553aecd5e5cbd1a0f6ea.1735549103.git.zhengqi.arch@bytedance.com>

From: Kevin Brodsky

Four architectures currently implement 5-level pgtables: arm64, riscv, x86
and s390. The first three have essentially the same implementation for
p4d_alloc_one() and p4d_free(), so we've got an opportunity to reduce
duplication like at the lower levels.

Provide a generic version of p4d_alloc_one() and p4d_free(), and make use
of it on those architectures. Their implementation is the same as at PUD
level, except that p4d_free() performs a runtime check by calling
mm_p4d_folded(). 5-level pgtables depend on a runtime-detected hardware
feature on all supported architectures, so we might as well include this
check in the generic implementation. No runtime check is required in
p4d_alloc_one() as the top-level p4d_alloc() already does the required
check.

Signed-off-by: Kevin Brodsky
Acked-by: Dave Hansen
Signed-off-by: Qi Zheng
---
 arch/arm64/include/asm/pgalloc.h | 17 ------------
 arch/riscv/include/asm/pgalloc.h | 23 ----------------
 arch/x86/include/asm/pgalloc.h   | 18 -------------
 include/asm-generic/pgalloc.h    | 45 ++++++++++++++++++++++++++++++++
 4 files changed, 45 insertions(+), 58 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index e75422864d1bd..2965f5a7e39e3 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -85,23 +85,6 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
         __pgd_populate(pgdp, __pa(p4dp), pgdval);
 }
 
-static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-        gfp_t gfp = GFP_PGTABLE_USER;
-
-        if (mm == &init_mm)
-                gfp = GFP_PGTABLE_KERNEL;
-        return (p4d_t *)get_zeroed_page(gfp);
-}
-
-static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-        if (!pgtable_l5_enabled())
-                return;
-        BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-        free_page((unsigned long)p4d);
-}
-
 #define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
 #else
 static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot)
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 8ad0bbe838a24..551d614d3369c 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -105,29 +105,6 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
         }
 }
 
-#define p4d_alloc_one p4d_alloc_one
-static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-        gfp_t gfp = GFP_PGTABLE_USER;
-
-        if (mm == &init_mm)
-                gfp = GFP_PGTABLE_KERNEL;
-        return (p4d_t *)get_zeroed_page(gfp);
-}
-
-static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-        BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-        free_page((unsigned long)p4d);
-}
-
-#define p4d_free p4d_free
-static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-        if (pgtable_l5_enabled)
-                __p4d_free(mm, p4d);
-}
-
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
                                   unsigned long addr)
 {
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index dcd836b59bebd..dd4841231bb9f 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -147,24 +147,6 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4
         set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
 }
 
-static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-        gfp_t gfp = GFP_KERNEL_ACCOUNT;
-
-        if (mm == &init_mm)
-                gfp &= ~__GFP_ACCOUNT;
-        return (p4d_t *)get_zeroed_page(gfp);
-}
-
-static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-        if (!pgtable_l5_enabled())
-                return;
-
-        BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-        free_page((unsigned long)p4d);
-}
-
 extern void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d);
 
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 7c48f5fbf8aa7..59131629ac9cc 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -215,6 +215,51 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 
 #endif /* CONFIG_PGTABLE_LEVELS > 3 */
 
+#if CONFIG_PGTABLE_LEVELS > 4
+
+static inline p4d_t *__p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long addr)
+{
+        gfp_t gfp = GFP_PGTABLE_USER;
+        struct ptdesc *ptdesc;
+
+        if (mm == &init_mm)
+                gfp = GFP_PGTABLE_KERNEL;
+        gfp &= ~__GFP_HIGHMEM;
+
+        ptdesc = pagetable_alloc_noprof(gfp, 0);
+        if (!ptdesc)
+                return NULL;
+
+        return ptdesc_address(ptdesc);
+}
+#define __p4d_alloc_one(...)    alloc_hooks(__p4d_alloc_one_noprof(__VA_ARGS__))
+
+#ifndef __HAVE_ARCH_P4D_ALLOC_ONE
+static inline p4d_t *p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long addr)
+{
+        return __p4d_alloc_one_noprof(mm, addr);
+}
+#define p4d_alloc_one(...)      alloc_hooks(p4d_alloc_one_noprof(__VA_ARGS__))
+#endif
+
+static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
+{
+        struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
+
+        BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
+        pagetable_free(ptdesc);
+}
+
+#ifndef __HAVE_ARCH_P4D_FREE
+static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
+{
+        if (!mm_p4d_folded(mm))
+                __p4d_free(mm, p4d);
+}
+#endif
+
+#endif /* CONFIG_PGTABLE_LEVELS > 4 */
+
 #ifndef __HAVE_ARCH_PGD_FREE
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
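
A note on how an architecture consumes these generic helpers: by default it simply
picks up p4d_alloc_one()/p4d_free() from include/asm-generic/pgalloc.h. An
architecture that needs its own variant defines the corresponding __HAVE_ARCH_*
macro and supplies its own function, while still being free to reuse __p4d_free()
for the actual release. A hypothetical override might look like the sketch below
(illustrative only, not taken from any in-tree architecture):

/* Hypothetical arch pgalloc.h fragment (sketch only). */
#define __HAVE_ARCH_P4D_FREE
#include <asm-generic/pgalloc.h>

static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
{
        if (mm_p4d_folded(mm))
                return;

        /* arch-specific bookkeeping could go here */
        __p4d_free(mm, p4d);    /* the generic helper still frees the page */
}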
From patchwork Mon Dec 30 09:07:39 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923144
From: Qi Zheng
Subject: [PATCH v4 04/15] mm: pgtable: add statistics for P4D level page table
Date: Mon, 30 Dec 2024 17:07:39 +0800
Message-Id: <2fa644e37ab917292f5c342e40fa805aa91afbbd.1735549103.git.zhengqi.arch@bytedance.com>

Like other levels of page tables, add statistics for P4D level page table.
Signed-off-by: Qi Zheng
Originally-by: Peter Zijlstra (Intel)
---
 arch/riscv/include/asm/pgalloc.h |  6 +++++-
 arch/x86/mm/pgtable.c            |  3 +++
 include/asm-generic/pgalloc.h    |  2 ++
 include/linux/mm.h               | 16 ++++++++++++++++
 4 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 551d614d3369c..3466fbe2e508d 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -108,8 +108,12 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
                                   unsigned long addr)
 {
-        if (pgtable_l5_enabled)
+        if (pgtable_l5_enabled) {
+                struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
+
+                pagetable_p4d_dtor(ptdesc);
                 riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d));
+        }
 }
 
 #endif /* __PAGETABLE_PMD_FOLDED */
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 69a357b15974a..3d6e84da45b24 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -94,6 +94,9 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 #if CONFIG_PGTABLE_LEVELS > 4
 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 {
+        struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
+
+        pagetable_p4d_dtor(ptdesc);
         paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT);
         paravirt_tlb_remove_table(tlb, virt_to_page(p4d));
 }
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 59131629ac9cc..bb482eeca0c3e 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -230,6 +230,7 @@ static inline p4d_t *__p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long
         if (!ptdesc)
                 return NULL;
 
+        pagetable_p4d_ctor(ptdesc);
         return ptdesc_address(ptdesc);
 }
 #define __p4d_alloc_one(...)    alloc_hooks(__p4d_alloc_one_noprof(__VA_ARGS__))
@@ -247,6 +248,7 @@ static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
         struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
 
         BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
+        pagetable_p4d_dtor(ptdesc);
         pagetable_free(ptdesc);
 }
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c49bc7b764535..5d82f42ddd5cc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3175,6 +3175,22 @@ static inline void pagetable_pud_dtor(struct ptdesc *ptdesc)
         lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
+static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
+{
+        struct folio *folio = ptdesc_folio(ptdesc);
+
+        __folio_set_pgtable(folio);
+        lruvec_stat_add_folio(folio, NR_PAGETABLE);
+}
+
+static inline void pagetable_p4d_dtor(struct ptdesc *ptdesc)
+{
+        struct folio *folio = ptdesc_folio(ptdesc);
+
+        __folio_clear_pgtable(folio);
+        lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 extern void __init pagecache_init(void);
 
 extern void free_initmem(void);
From patchwork Mon Dec 30 09:07:40 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923145
From: Qi Zheng
Subject: [PATCH v4 05/15] arm64: pgtable: use mmu gather to free p4d level page table
Date: Mon, 30 Dec 2024 17:07:40 +0800
Message-Id: <7c12112047ac230809aacd0379259414b9b0d3a3.1735549103.git.zhengqi.arch@bytedance.com>

Like other levels of page tables, also use mmu gather mechanism to free
p4d level page table.
Signed-off-by: Qi Zheng
Originally-by: Peter Zijlstra (Intel)
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |  1 -
 arch/arm64/include/asm/tlb.h     | 14 ++++++++++++++
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 2965f5a7e39e3..1b4509d3382c6 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -85,7 +85,6 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
         __pgd_populate(pgdp, __pa(p4dp), pgdval);
 }
 
-#define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
 #else
 static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot)
 {
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index a947c6e784ed2..445282cde9afb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -111,4 +111,18 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 }
 #endif
 
+#if CONFIG_PGTABLE_LEVELS > 4
+static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp,
+                                  unsigned long addr)
+{
+        struct ptdesc *ptdesc = virt_to_ptdesc(p4dp);
+
+        if (!pgtable_l5_enabled())
+                return;
+
+        pagetable_p4d_dtor(ptdesc);
+        tlb_remove_ptdesc(tlb, ptdesc);
+}
+#endif
+
 #endif
From patchwork Mon Dec 30 09:07:41 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923146
From: Qi Zheng
Subject: [PATCH v4 06/15] s390: pgtable: add statistics for PUD and P4D level page table
Date: Mon, 30 Dec 2024 17:07:41 +0800
Message-Id: <35be22a2b1666df729a9fc108c2da5cce266e4be.1735549103.git.zhengqi.arch@bytedance.com>
Like PMD and PTE level page table, also add statistics for PUD and P4D
page table.

Signed-off-by: Qi Zheng
Suggested-by: Peter Zijlstra (Intel)
Cc: linux-s390@vger.kernel.org
---
 arch/s390/include/asm/pgalloc.h | 29 +++++++++++++++++++-------
 arch/s390/include/asm/tlb.h     | 37 +++++++++++++++++----------------
 2 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 7b84ef6dc4b6d..a0c1ca5d8423c 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -53,29 +53,42 @@ static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long address)
 {
         unsigned long *table = crst_table_alloc(mm);
 
-        if (table)
-                crst_table_init(table, _REGION2_ENTRY_EMPTY);
+        if (!table)
+                return NULL;
+        crst_table_init(table, _REGION2_ENTRY_EMPTY);
+        pagetable_p4d_ctor(virt_to_ptdesc(table));
+
         return (p4d_t *) table;
 }
 
 static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 {
-        if (!mm_p4d_folded(mm))
-                crst_table_free(mm, (unsigned long *) p4d);
+        if (mm_p4d_folded(mm))
+                return;
+
+        pagetable_p4d_dtor(virt_to_ptdesc(p4d));
+        crst_table_free(mm, (unsigned long *) p4d);
 }
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
         unsigned long *table = crst_table_alloc(mm);
-        if (table)
-                crst_table_init(table, _REGION3_ENTRY_EMPTY);
+
+        if (!table)
+                return NULL;
+        crst_table_init(table, _REGION3_ENTRY_EMPTY);
+        pagetable_pud_ctor(virt_to_ptdesc(table));
+
         return (pud_t *) table;
 }
 
 static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 {
-        if (!mm_pud_folded(mm))
-                crst_table_free(mm, (unsigned long *) pud);
+        if (mm_pud_folded(mm))
+                return;
+
+        pagetable_pud_dtor(virt_to_ptdesc(pud));
+        crst_table_free(mm, (unsigned long *) pud);
 }
 
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index e95b2c8081eb8..b946964afce8e 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -110,24 +110,6 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
         tlb_remove_ptdesc(tlb, pmd);
 }
 
-/*
- * p4d_free_tlb frees a pud table and clears the CRSTE for the
- * region second table entry from the tlb.
- * If the mm uses a four level page table the single p4d is freed
- * as the pgd. p4d_free_tlb checks the asce_limit against 8PB
- * to avoid the double free of the p4d in this case.
- */
-static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
-                                unsigned long address)
-{
-        if (mm_p4d_folded(tlb->mm))
-                return;
-        __tlb_adjust_range(tlb, address, PAGE_SIZE);
-        tlb->mm->context.flush_mm = 1;
-        tlb->freed_tables = 1;
-        tlb_remove_ptdesc(tlb, p4d);
-}
-
 /*
  * pud_free_tlb frees a pud table and clears the CRSTE for the
  * region third table entry from the tlb.
@@ -140,11 +122,30 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 {
         if (mm_pud_folded(tlb->mm))
                 return;
+        pagetable_pud_dtor(virt_to_ptdesc(pud));
         tlb->mm->context.flush_mm = 1;
         tlb->freed_tables = 1;
         tlb->cleared_p4ds = 1;
         tlb_remove_ptdesc(tlb, pud);
 }
 
+/*
+ * p4d_free_tlb frees a p4d table and clears the CRSTE for the
+ * region second table entry from the tlb.
+ * If the mm uses a four level page table the single p4d is freed
+ * as the pgd. p4d_free_tlb checks the asce_limit against 8PB
+ * to avoid the double free of the p4d in this case.
+ */
+static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
+                                unsigned long address)
+{
+        if (mm_p4d_folded(tlb->mm))
+                return;
+        pagetable_p4d_dtor(virt_to_ptdesc(p4d));
+        __tlb_adjust_range(tlb, address, PAGE_SIZE);
+        tlb->mm->context.flush_mm = 1;
+        tlb->freed_tables = 1;
+        tlb_remove_ptdesc(tlb, p4d);
+}
+
 #endif /* _S390_TLB_H */
From patchwork Mon Dec 30 09:07:42 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13923147
From: Qi Zheng
Subject: [PATCH v4 07/15] mm: pgtable: introduce pagetable_dtor()
Date: Mon, 30 Dec 2024 17:07:42 +0800
Message-Id: <8ada95453180c71b7fca92b9a9f11fa0f92d45a6.1735549103.git.zhengqi.arch@bytedance.com>

The pagetable_p*_dtor() functions are exactly the same except for the
handling of ptlock. If we make ptlock_free() handle the case where
ptdesc->ptl is NULL and remove VM_BUG_ON_PAGE() from pmd_ptlock_free(),
we can unify pagetable_p*_dtor() into one function. Let's introduce
pagetable_dtor() to do this.

Later, pagetable_dtor() will be moved to tlb_remove_ptdesc(), so that the
ptlock and the page table pages can be freed together (regardless of
whether RCU is used). This prevents the use-after-free problem where the
ptlock is freed immediately while the page table pages are freed later
via RCU.
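
Putting the pieces of this description together, the unified helper presumably
ends up shaped roughly like the sketch below (assembled from the
pagetable_p*_dtor() bodies shown in the earlier patches plus the unconditional
ptlock_free() mentioned above; an illustration, not the literal hunk from this
patch):

/* Illustrative sketch of the unified destructor described above. */
static inline void pagetable_dtor(struct ptdesc *ptdesc)
{
        struct folio *folio = ptdesc_folio(ptdesc);

        ptlock_free(ptdesc);            /* must now tolerate ptdesc->ptl == NULL */
        __folio_clear_pgtable(folio);
        lruvec_stat_sub_folio(folio, NR_PAGETABLE);
}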
Signed-off-by: Qi Zheng Originally-by: Peter Zijlstra (Intel) --- Documentation/mm/split_page_table_lock.rst | 4 +- arch/arm/include/asm/tlb.h | 4 +- arch/arm64/include/asm/tlb.h | 8 ++-- arch/csky/include/asm/pgalloc.h | 2 +- arch/hexagon/include/asm/pgalloc.h | 2 +- arch/loongarch/include/asm/pgalloc.h | 2 +- arch/m68k/include/asm/mcf_pgalloc.h | 4 +- arch/m68k/include/asm/sun3_pgalloc.h | 2 +- arch/m68k/mm/motorola.c | 2 +- arch/mips/include/asm/pgalloc.h | 2 +- arch/nios2/include/asm/pgalloc.h | 2 +- arch/openrisc/include/asm/pgalloc.h | 2 +- arch/powerpc/mm/book3s64/mmu_context.c | 2 +- arch/powerpc/mm/book3s64/pgtable.c | 2 +- arch/powerpc/mm/pgtable-frag.c | 4 +- arch/riscv/include/asm/pgalloc.h | 8 ++-- arch/riscv/mm/init.c | 4 +- arch/s390/include/asm/pgalloc.h | 6 +-- arch/s390/include/asm/tlb.h | 6 +-- arch/s390/mm/pgalloc.c | 2 +- arch/sh/include/asm/pgalloc.h | 2 +- arch/sparc/mm/init_64.c | 2 +- arch/sparc/mm/srmmu.c | 2 +- arch/um/include/asm/pgalloc.h | 6 +-- arch/x86/mm/pgtable.c | 12 ++--- include/asm-generic/pgalloc.h | 8 ++-- include/linux/mm.h | 52 ++++------------------ mm/memory.c | 3 +- 28 files changed, 62 insertions(+), 95 deletions(-) diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst index 581446d4a4eba..8e1ceb0a6619a 100644 --- a/Documentation/mm/split_page_table_lock.rst +++ b/Documentation/mm/split_page_table_lock.rst @@ -62,7 +62,7 @@ Support of split page table lock by an architecture =================================================== There's no need in special enabling of PTE split page table lock: everything -required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which +required is done by pagetable_pte_ctor() and pagetable_dtor(), which must be called on PTE table allocation / freeing. Make sure the architecture doesn't use slab allocator for page table @@ -73,7 +73,7 @@ PMD split lock only makes sense if you have more than two page table levels. PMD split lock enabling requires pagetable_pmd_ctor() call on PMD table -allocation and pagetable_pmd_dtor() on freeing. +allocation and pagetable_dtor() on freeing. 
Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index f40d06ad5d2a3..ef79bf1e8563f 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -41,7 +41,7 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); #ifndef CONFIG_ARM_LPAE /* @@ -61,7 +61,7 @@ __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) #ifdef CONFIG_ARM_LPAE struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); #endif } diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index 445282cde9afb..408d0f36a8a8f 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -82,7 +82,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } @@ -92,7 +92,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, { struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -106,7 +106,7 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, if (!pgtable_l4_enabled()) return; - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -120,7 +120,7 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp, if (!pgtable_l5_enabled()) return; - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index 9c84c9012e534..f1ce5b7b28f22 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -63,7 +63,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc(tlb, page_ptdesc(pte)); \ } while (0) diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index 55988625e6fbc..40e42a0e71673 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -89,7 +89,7 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor((page_ptdesc(pte))); \ + pagetable_dtor((page_ptdesc(pte))); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index a7b9c9e73593d..7211dff8c969e 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -57,7 +57,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h index 302c5bf67179e..22d6c1fcabfb4 100644 --- a/arch/m68k/include/asm/mcf_pgalloc.h +++ b/arch/m68k/include/asm/mcf_pgalloc.h 
@@ -37,7 +37,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable, { struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } @@ -61,7 +61,7 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) { struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 4a137eecb6fe4..2b626cb3ad0ae 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -19,7 +19,7 @@ extern const char bad_pmd_string[]; #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index c1761d309fc61..81715cece70c6 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type) list_del(dp); mmu_page_dtor((void *)page); if (type == TABLE_PTE) - pagetable_pte_dtor(virt_to_ptdesc((void *)page)); + pagetable_dtor(virt_to_ptdesc((void *)page)); free_page (page); return 1; } else if (ptable_list[type].next != dp) { diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index f4440edcd8fe2..36d9805033c4b 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -56,7 +56,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index ce6bb8e74271f..12a536b7bfbd4 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -30,7 +30,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index c6a73772a5466..596e2355824e3 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -68,7 +68,7 @@ extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c index 1715b07c630c9..4e1e45420bd49 100644 --- a/arch/powerpc/mm/book3s64/mmu_context.c +++ b/arch/powerpc/mm/book3s64/mmu_context.c @@ -253,7 +253,7 @@ static void pmd_frag_destroy(void *pmd_frag) count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index 3745425280808..3f28e4acd920b 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ 
b/arch/powerpc/mm/book3s64/pgtable.c @@ -477,7 +477,7 @@ void pmd_fragment_free(unsigned long *pmd) BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c index e89f64a0f24ae..713268ccb1a0e 100644 --- a/arch/powerpc/mm/pgtable-frag.c +++ b/arch/powerpc/mm/pgtable-frag.c @@ -25,7 +25,7 @@ void pte_frag_destroy(void *pte_frag) count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } } @@ -111,7 +111,7 @@ static void pte_free_now(struct rcu_head *head) struct ptdesc *ptdesc; ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index 3466fbe2e508d..b6793c5c99296 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -100,7 +100,7 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, if (pgtable_l4_enabled) { struct ptdesc *ptdesc = virt_to_ptdesc(pud); - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, ptdesc); } } @@ -111,7 +111,7 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, if (pgtable_l5_enabled) { struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); } } @@ -144,7 +144,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, ptdesc); } @@ -155,7 +155,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, ptdesc); } #endif /* CONFIG_MMU */ diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index fc53ce748c804..8d703fb51b1dc 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -1558,7 +1558,7 @@ static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd) return; } - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); if (PageReserved(page)) free_reserved_page(page); else @@ -1580,7 +1580,7 @@ static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud, bool is_vmemm } if (!is_vmemmap) - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); if (PageReserved(page)) free_reserved_page(page); else diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h index a0c1ca5d8423c..5fced6d3c36b0 100644 --- a/arch/s390/include/asm/pgalloc.h +++ b/arch/s390/include/asm/pgalloc.h @@ -66,7 +66,7 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) if (mm_p4d_folded(mm)) return; - pagetable_p4d_dtor(virt_to_ptdesc(p4d)); + pagetable_dtor(virt_to_ptdesc(p4d)); crst_table_free(mm, (unsigned long *) p4d); } @@ -87,7 +87,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) if (mm_pud_folded(mm)) return; - pagetable_pud_dtor(virt_to_ptdesc(pud)); + pagetable_dtor(virt_to_ptdesc(pud)); crst_table_free(mm, (unsigned long *) pud); } @@ 
-109,7 +109,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { if (mm_pmd_folded(mm)) return; - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); + pagetable_dtor(virt_to_ptdesc(pmd)); crst_table_free(mm, (unsigned long *) pmd); } diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index b946964afce8e..74b6fba4c2ee3 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -102,7 +102,7 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); + pagetable_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; @@ -122,7 +122,7 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, { if (mm_pud_folded(tlb->mm)) return; - pagetable_pud_dtor(virt_to_ptdesc(pud)); + pagetable_dtor(virt_to_ptdesc(pud)); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; @@ -141,7 +141,7 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, { if (mm_p4d_folded(tlb->mm)) return; - pagetable_p4d_dtor(virt_to_ptdesc(p4d)); + pagetable_dtor(virt_to_ptdesc(p4d)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 58696a0c4e4ac..569de24d33761 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -182,7 +182,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm) static void pagetable_pte_dtor_free(struct ptdesc *ptdesc) { - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h index 5d8577ab15911..96d938fdf2244 100644 --- a/arch/sh/include/asm/pgalloc.h +++ b/arch/sh/include/asm/pgalloc.h @@ -34,7 +34,7 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 21f8cbbd0581c..05882bca5b732 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2915,7 +2915,7 @@ static void __pte_free(pgtable_t pte) { struct ptdesc *ptdesc = virt_to_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 9df51a62333d6..e3a72c884b867 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -372,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep) page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT); spin_lock(&mm->page_table_lock); if (page_ref_dec_return(page) == 1) - pagetable_pte_dtor(page_ptdesc(page)); + pagetable_dtor(page_ptdesc(page)); spin_unlock(&mm->page_table_lock); srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE); diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index 04fb4e6969a46..f0af23c3aeb2b 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -27,7 +27,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *); #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) @@ -35,7 +35,7 @@ do { \ #define __pmd_free_tlb(tlb, pmd, 
address) \ do { \ - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); \ + pagetable_dtor(virt_to_ptdesc(pmd)); \ tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \ } while (0) @@ -43,7 +43,7 @@ do { \ #define __pud_free_tlb(tlb, pud, address) \ do { \ - pagetable_pud_dtor(virt_to_ptdesc(pud)); \ + pagetable_dtor(virt_to_ptdesc(pud)); \ tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \ } while (0) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 3d6e84da45b24..a6cd9660e29ec 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -60,7 +60,7 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pagetable_pte_dtor(page_ptdesc(pte)); + pagetable_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -77,7 +77,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); } @@ -86,7 +86,7 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) { struct ptdesc *ptdesc = virt_to_ptdesc(pud); - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); paravirt_release_pud(__pa(pud) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(pud)); } @@ -96,7 +96,7 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) { struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(p4d)); } @@ -233,7 +233,7 @@ static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) if (pmds[i]) { ptdesc = virt_to_ptdesc(pmds[i]); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); mm_dec_nr_pmds(mm); } @@ -867,7 +867,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) free_page((unsigned long)pmd_sv); - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); + pagetable_dtor(virt_to_ptdesc(pmd)); free_page((unsigned long)pmd); return 1; diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index bb482eeca0c3e..4afb346eae255 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -109,7 +109,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) { struct ptdesc *ptdesc = page_ptdesc(pte_page); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } @@ -153,7 +153,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) struct ptdesc *ptdesc = virt_to_ptdesc(pmd); BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } #endif @@ -202,7 +202,7 @@ static inline void __pud_free(struct mm_struct *mm, pud_t *pud) struct ptdesc *ptdesc = virt_to_ptdesc(pud); BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } @@ -248,7 +248,7 @@ static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d) struct ptdesc *ptdesc = virt_to_ptdesc(p4d); BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/include/linux/mm.h b/include/linux/mm.h index 5d82f42ddd5cc..cad11fa10c192 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2992,6 +2992,15 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void 
ptlock_free(struct ptdesc *ptdesc) {} #endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */ +static inline void pagetable_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + ptlock_free(ptdesc); + __folio_clear_pgtable(folio); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -3003,15 +3012,6 @@ static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) return true; } -static inline void pagetable_pte_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - ptlock_free(ptdesc); - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) @@ -3088,14 +3088,6 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) return ptlock_init(ptdesc); } -static inline void pmd_ptlock_free(struct ptdesc *ptdesc) -{ -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); -#endif - ptlock_free(ptdesc); -} - #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) #else @@ -3106,7 +3098,6 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {} #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte) @@ -3131,15 +3122,6 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc) return true; } -static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - pmd_ptlock_free(ptdesc); - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - /* * No scalability reason to split PUD locks yet, but follow the same pattern * as the PMD locks to make it easier if we decide to. 
The VM should not be @@ -3167,14 +3149,6 @@ static inline void pagetable_pud_ctor(struct ptdesc *ptdesc) lruvec_stat_add_folio(folio, NR_PAGETABLE); } -static inline void pagetable_pud_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -3183,14 +3157,6 @@ static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc) lruvec_stat_add_folio(folio, NR_PAGETABLE); } -static inline void pagetable_p4d_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - extern void __init pagecache_init(void); extern void free_initmem(void); diff --git a/mm/memory.c b/mm/memory.c index 9423967b24180..ad871e564568b 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -7051,7 +7051,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc) void ptlock_free(struct ptdesc *ptdesc) { - kmem_cache_free(page_ptl_cachep, ptdesc->ptl); + if (ptdesc->ptl) + kmem_cache_free(page_ptl_cachep, ptdesc->ptl); } #endif

From patchwork Mon Dec 30 09:07:43 2024
From: Qi Zheng
Subject: [PATCH v4 08/15] arm: pgtable: move pagetable_dtor() to __tlb_remove_table()
Date: Mon, 30 Dec 2024 17:07:43 +0800
Message-Id: <955162bfbbcd9fbb3b074e1fe2aef4f64b61d6f9.1735549103.git.zhengqi.arch@bytedance.com>
Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages are freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm/include/asm/tlb.h | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index ef79bf1e8563f..264ab635e807a 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -26,12 +26,14 @@ #else /* !CONFIG_MMU */ -#include #include static inline void __tlb_remove_table(void *_table) { - free_page_and_swap_cache((struct page *)_table); + struct ptdesc *ptdesc = (struct ptdesc *)_table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #include @@ -41,8 +43,6 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_dtor(ptdesc); - #ifndef CONFIG_ARM_LPAE /* * With the classic ARM MMU, a pte page has two corresponding pmd @@ -61,7 +61,6 @@ __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) #ifdef CONFIG_ARM_LPAE struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); #endif }

From patchwork Mon Dec 30 09:07:44 2024
From: Qi Zheng
Subject: [PATCH v4 09/15] arm64: pgtable: move pagetable_dtor() to __tlb_remove_table()
Date: Mon, 30 Dec 2024 17:07:44 +0800

Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages are freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/tlb.h | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index 408d0f36a8a8f..93591a80b5bfb 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -9,11 +9,13 @@ #define __ASM_TLB_H #include -#include static inline void __tlb_remove_table(void *_table) { - free_page_and_swap_cache((struct page *)_table); + struct ptdesc *ptdesc = (struct ptdesc *)_table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #define tlb_flush tlb_flush @@ -82,7 +84,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } @@ -92,7 +93,6 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, { struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -106,7 +106,6 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, if (!pgtable_l4_enabled()) return; - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -120,7 +119,6 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp, if (!pgtable_l5_enabled()) return; - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif

From patchwork Mon Dec 30 09:07:45 2024
From: Qi Zheng
Subject: [PATCH v4 10/15] riscv: pgtable: move pagetable_dtor() to __tlb_remove_table()
Date: Mon, 30 Dec 2024 17:07:45 +0800
Message-Id: <0e8f0b3835c15e99145e0006ac1020ae45a2b166.1735549103.git.zhengqi.arch@bytedance.com>

Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages are freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. By the way, move the comment above __tlb_remove_table() to riscv_tlb_remove_ptdesc(), where it is more appropriate. Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-riscv@lists.infradead.org --- arch/riscv/include/asm/pgalloc.h | 38 ++++++++++++++------------------ arch/riscv/include/asm/tlb.h | 14 ++++-------- 2 files changed, 21 insertions(+), 31 deletions(-) diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index b6793c5c99296..c8907b8317115 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -15,12 +15,22 @@ #define __HAVE_ARCH_PUD_FREE #include +/* + * While riscv platforms with riscv_ipi_for_rfence as true require an IPI to + * perform TLB shootdown, some platforms with riscv_ipi_for_rfence as false use + * SBI to perform TLB shootdown. To keep software pagetable walkers safe in this + * case we switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the + * comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h + * for more details.
+ */ static inline void riscv_tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) { - if (riscv_use_sbi_for_rfence()) + if (riscv_use_sbi_for_rfence()) { tlb_remove_ptdesc(tlb, pt); - else + } else { + pagetable_dtor(pt); tlb_remove_page_ptdesc(tlb, pt); + } } static inline void pmd_populate_kernel(struct mm_struct *mm, @@ -97,23 +107,15 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, unsigned long addr) { - if (pgtable_l4_enabled) { - struct ptdesc *ptdesc = virt_to_ptdesc(pud); - - pagetable_dtor(ptdesc); - riscv_tlb_remove_ptdesc(tlb, ptdesc); - } + if (pgtable_l4_enabled) + riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); } static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, unsigned long addr) { - if (pgtable_l5_enabled) { - struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - - pagetable_dtor(ptdesc); + if (pgtable_l5_enabled) riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); - } } #endif /* __PAGETABLE_PMD_FOLDED */ @@ -142,10 +144,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr) { - struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - - pagetable_dtor(ptdesc); - riscv_tlb_remove_ptdesc(tlb, ptdesc); + riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); } #endif /* __PAGETABLE_PMD_FOLDED */ @@ -153,10 +152,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - - pagetable_dtor(ptdesc); - riscv_tlb_remove_ptdesc(tlb, ptdesc); + riscv_tlb_remove_ptdesc(tlb, page_ptdesc(pte)); } #endif /* CONFIG_MMU */ diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h index 1f6c38420d8e0..ded8724b3c4f7 100644 --- a/arch/riscv/include/asm/tlb.h +++ b/arch/riscv/include/asm/tlb.h @@ -11,19 +11,13 @@ struct mmu_gather; static void tlb_flush(struct mmu_gather *tlb); #ifdef CONFIG_MMU -#include -/* - * While riscv platforms with riscv_ipi_for_rfence as true require an IPI to - * perform TLB shootdown, some platforms with riscv_ipi_for_rfence as false use - * SBI to perform TLB shootdown. To keep software pagetable walkers safe in this - * case we switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the - * comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h - * for more details. 
- */ static inline void __tlb_remove_table(void *table) { - free_page_and_swap_cache(table); + struct ptdesc *ptdesc = (struct ptdesc *)table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #endif /* CONFIG_MMU */

From patchwork Mon Dec 30 09:07:46 2024
From: Qi Zheng
Subject: [PATCH v4 11/15] x86: pgtable: move pagetable_dtor() to __tlb_remove_table()
Date: Mon, 30 Dec 2024 17:07:46 +0800
Message-Id: <0dc5a3bf5a692e24379c1d3b879a6d4396f0dbbd.1735549103.git.zhengqi.arch@bytedance.com>

Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages are freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages.
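Put differently, the end state on x86 (a condensed sketch of the hunks below, not a standalone implementation) is that the destructor runs in whichever path actually frees the table, while ___pte_free_tlb()/___pmd_free_tlb()/___pud_free_tlb()/___p4d_free_tlb() drop their own pagetable_dtor() calls:

/* Deferred path (mmu_gather table free, possibly via RCU): */
static inline void __tlb_remove_table(void *table)
{
	struct ptdesc *ptdesc = (struct ptdesc *)table;

	pagetable_dtor(ptdesc);
	pagetable_free(ptdesc);
}

/* Non-RCU path, where the table still goes through the regular
 * tlb_remove_page() batching: */
static inline void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	pagetable_dtor(table);
	tlb_remove_page(tlb, table);
}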
Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: x86@kernel.org --- arch/x86/include/asm/tlb.h | 17 ++++++++++------- arch/x86/kernel/paravirt.c | 1 + arch/x86/mm/pgtable.c | 12 ++---------- 3 files changed, 13 insertions(+), 17 deletions(-) diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index 73f0786181cc9..f64730be5ad67 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -31,24 +31,27 @@ static inline void tlb_flush(struct mmu_gather *tlb) */ static inline void __tlb_remove_table(void *table) { - free_page_and_swap_cache(table); + struct ptdesc *ptdesc = (struct ptdesc *)table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #ifdef CONFIG_PT_RECLAIM static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) { - struct page *page; + struct ptdesc *ptdesc; - page = container_of(head, struct page, rcu_head); - put_page(page); + ptdesc = container_of(head, struct ptdesc, pt_rcu_head); + __tlb_remove_table(ptdesc); } static inline void __tlb_remove_table_one(void *table) { - struct page *page; + struct ptdesc *ptdesc; - page = table; - call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu); + ptdesc = table; + call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu); } #define __tlb_remove_table_one __tlb_remove_table_one #endif /* CONFIG_PT_RECLAIM */ diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 7bdcf152778c0..46d5d325483b0 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -62,6 +62,7 @@ void __init native_pv_lock_init(void) #ifndef CONFIG_PT_RECLAIM static void native_tlb_remove_table(struct mmu_gather *tlb, void *table) { + pagetable_dtor(table); tlb_remove_page(tlb, table); } #else diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index a6cd9660e29ec..a0b0e501ba663 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -23,6 +23,7 @@ EXPORT_SYMBOL(physical_mask); static inline void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table) { + pagetable_dtor(table); tlb_remove_page(tlb, table); } #else @@ -60,7 +61,6 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pagetable_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -68,7 +68,6 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) #if CONFIG_PGTABLE_LEVELS > 2 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) { - struct ptdesc *ptdesc = virt_to_ptdesc(pmd); paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); /* * NOTE! 
For PAE, any changes to the top page-directory-pointer-table @@ -77,16 +76,12 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pagetable_dtor(ptdesc); - paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); + paravirt_tlb_remove_table(tlb, virt_to_page(pmd)); } #if CONFIG_PGTABLE_LEVELS > 3 void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) { - struct ptdesc *ptdesc = virt_to_ptdesc(pud); - - pagetable_dtor(ptdesc); paravirt_release_pud(__pa(pud) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(pud)); } @@ -94,9 +89,6 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) #if CONFIG_PGTABLE_LEVELS > 4 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) { - struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - - pagetable_dtor(ptdesc); paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(p4d)); } From patchwork Mon Dec 30 09:07:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13923152 Received: from mail-pl1-f171.google.com (mail-pl1-f171.google.com [209.85.214.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 807C41A3A8A for ; Mon, 30 Dec 2024 09:11:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.171 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735549872; cv=none; b=qK0GBZ2XO/aC931Yb1k88GwXv59jVoBUGyPPKrh+asfL1TMcHedtUWOC/YkQcBa+XwAwrMLRHi+whzjuorOPuI5pS/pW+l6DvWIsgjxdWpiHX9BTORerEKi3Olm+8Z6RteTtRtP5YDA+zI0j3PltvIQds/mImo5Ww5zwRBNJx50= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735549872; c=relaxed/simple; bh=ySnIl2i3Fo/t1YN/rcwyhGqxWwlx1TNUJRz33ZWxlkE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=rTDeZoZiOrUxHmq4oeAOfUBhYvDrPN8BJTIIZtVZ6RI7EaGWeSmYKOyg4Rc2rGERm0Pp2Tktv9JaTz9kHvnJl9an4IBAA8TR7mHAyjHMvL6fa5mO2z9gcxfqWWx/FrRmotx+cinSXtCfvhLAC4Nlh4lYZeGDcMGkyERwW3h79JQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=CbvM39v8; arc=none smtp.client-ip=209.85.214.171 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="CbvM39v8" Received: by mail-pl1-f171.google.com with SMTP id d9443c01a7336-2165448243fso138922245ad.1 for ; Mon, 30 Dec 2024 01:11:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1735549870; x=1736154670; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=jyEzElX2/Cl2Hb0vACePBo9dDp7jwROhL3d0Mq9f7ig=; b=CbvM39v8DSxX3gTbaGPkNFXL2gTwNFbnSvW9hMdWzoZdSAtW4hbM7cGR3bLAME+DTg eLqPZxIF/CeaTOx8hjA0ITXh4+xGxzfJzxuRHVzPc5Ap2qf00l4doFQn5sAjWsiVzYuu gxk5PCTQ/K8GFKmxsi4rbVYXXE/QXdE1bbaZzuxLrh/Tgo1c3W5H0qNOca4nTF8TS4t5 LkCaPMhQtv3BdNeDF9l6NsVnG/1Gz7Bcky/2IBBDRGbanH5Hqdn2K2xkBn44LLuhyTlV 
From: Qi Zheng
Subject: [PATCH v4 12/15] s390: pgtable: also move pagetable_dtor() of PxD to __tlb_remove_table()
Date: Mon, 30 Dec 2024 17:07:47 +0800

To unify the PxD and PTE TLB free paths, also move the pagetable_dtor() of
PMD|PUD|P4D page tables to __tlb_remove_table().
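For orientation (this note is not part of the patch), the effect on s390 is
that the CRST_ALLOC_ORDER special case in __tlb_remove_table() goes away and
every page table level is freed through the same helper; the names below are
taken from the hunks that follow:

	/* s390 __tlb_remove_table() after this patch */
	void __tlb_remove_table(void *table)
	{
		struct ptdesc *ptdesc = virt_to_ptdesc(table);

		/* one path for PTE tables and PMD/PUD/P4D (CRST) tables */
		pagetable_dtor_free(ptdesc);
	}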
Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-s390@vger.kernel.org --- arch/s390/include/asm/tlb.h | 3 --- arch/s390/mm/pgalloc.c | 14 ++++---------- 2 files changed, 4 insertions(+), 13 deletions(-) diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index 74b6fba4c2ee3..79df7c0932c56 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -102,7 +102,6 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pagetable_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; @@ -122,7 +121,6 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, { if (mm_pud_folded(tlb->mm)) return; - pagetable_dtor(virt_to_ptdesc(pud)); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; @@ -141,7 +139,6 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, { if (mm_p4d_folded(tlb->mm)) return; - pagetable_dtor(virt_to_ptdesc(p4d)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 569de24d33761..c73b89811a264 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -180,7 +180,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm) return table; } -static void pagetable_pte_dtor_free(struct ptdesc *ptdesc) +static void pagetable_dtor_free(struct ptdesc *ptdesc) { pagetable_dtor(ptdesc); pagetable_free(ptdesc); @@ -190,20 +190,14 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) { struct ptdesc *ptdesc = virt_to_ptdesc(table); - pagetable_pte_dtor_free(ptdesc); + pagetable_dtor_free(ptdesc); } void __tlb_remove_table(void *table) { struct ptdesc *ptdesc = virt_to_ptdesc(table); - struct page *page = ptdesc_page(ptdesc); - if (compound_order(page) == CRST_ALLOC_ORDER) { - /* pmd, pud, or p4d */ - pagetable_free(ptdesc); - return; - } - pagetable_pte_dtor_free(ptdesc); + pagetable_dtor_free(ptdesc); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -211,7 +205,7 @@ static void pte_free_now(struct rcu_head *head) { struct ptdesc *ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - pagetable_pte_dtor_free(ptdesc); + pagetable_dtor_free(ptdesc); } void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable) From patchwork Mon Dec 30 09:07:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13923153 Received: from mail-pl1-f180.google.com (mail-pl1-f180.google.com [209.85.214.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ADB9F1A83E6 for ; Mon, 30 Dec 2024 09:11:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735549886; cv=none; b=samCKi72ml1KV9OkWgQcjfcIm2DF+jyRuyKxcZ6v9rAbNHoeReLC4kOaYzPudR4C2wxmeVpnP9CX+2aJIKN1o4t8Dwi8zVp/4zvJaFa3TqSW2dW7J6m06EuUMJnPHyJzTMUtEo2uJOONH/1BJfUPaNiIe5w3ejQ/Kj3XCXGv6yA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735549886; c=relaxed/simple; bh=EElIB1gpaaG9REY1IEzvLAKdR3Au6Zf5cYxlQP4u0WQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
From: Qi Zheng
Subject: [PATCH v4 13/15] mm: pgtable: introduce generic __tlb_remove_table()
Date: Mon, 30 Dec 2024 17:07:48 +0800

Several architectures (arm, arm64, riscv and x86) define exactly the same
__tlb_remove_table(), so introduce a generic __tlb_remove_table() to
eliminate this duplication. The s390 __tlb_remove_table() is nearly
identical, so switch s390 over to the generic version as well.

Signed-off-by: Qi Zheng
---
 arch/arm/include/asm/tlb.h      |  9 ---------
 arch/arm64/include/asm/tlb.h    |  7 -------
 arch/powerpc/include/asm/tlb.h  |  1 +
 arch/riscv/include/asm/tlb.h    | 12 ------------
 arch/s390/include/asm/tlb.h     |  9 ++++-----
 arch/s390/mm/pgalloc.c          |  7 -------
 arch/sparc/include/asm/tlb_32.h |  1 +
 arch/sparc/include/asm/tlb_64.h |  1 +
 arch/x86/include/asm/tlb.h      | 17 -----------------
 include/asm-generic/tlb.h       | 15 +++++++++++++--
 10 files changed, 20 insertions(+), 59 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 264ab635e807a..ea4fbe7b17f6f 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -27,15 +27,6 @@
 #else /* !CONFIG_MMU */
 #include
-
-static inline void __tlb_remove_table(void *_table)
-{
-	struct ptdesc *ptdesc = (struct ptdesc *)_table;
-
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
-}
-
 #include
 static inline void
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 93591a80b5bfb..8d762607285cc 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -10,13 +10,6 @@
 #include
-static inline void __tlb_remove_table(void *_table)
-{
-	struct ptdesc *ptdesc = (struct ptdesc *)_table;
-
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
-}
 #define tlb_flush tlb_flush
 static void tlb_flush(struct mmu_gather *tlb);
diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index 1ca7d4c4b90db..2058e8d3e0138 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -37,6 +37,7 @@ extern void tlb_flush(struct mmu_gather *tlb);
  */
 #define tlb_needs_table_invalidate() radix_enabled()
+#define __HAVE_ARCH_TLB_REMOVE_TABLE
 /* Get the generic bits...
*/ #include diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h index ded8724b3c4f7..50b63b5c15bd8 100644 --- a/arch/riscv/include/asm/tlb.h +++ b/arch/riscv/include/asm/tlb.h @@ -10,18 +10,6 @@ struct mmu_gather; static void tlb_flush(struct mmu_gather *tlb); -#ifdef CONFIG_MMU - -static inline void __tlb_remove_table(void *table) -{ - struct ptdesc *ptdesc = (struct ptdesc *)table; - - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); -} - -#endif /* CONFIG_MMU */ - #define tlb_flush tlb_flush #include diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index 79df7c0932c56..da4a7d175f69c 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -22,7 +22,6 @@ * Pages used for the page tables is a different story. FIXME: more */ -void __tlb_remove_table(void *_table); static inline void tlb_flush(struct mmu_gather *tlb); static inline bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, bool delay_rmap, int page_size); @@ -87,7 +86,7 @@ static inline void pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, tlb->cleared_pmds = 1; if (mm_alloc_pgste(tlb->mm)) gmap_unlink(tlb->mm, (unsigned long *)pte, address); - tlb_remove_ptdesc(tlb, pte); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pte)); } /* @@ -106,7 +105,7 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_puds = 1; - tlb_remove_ptdesc(tlb, pmd); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); } /* @@ -124,7 +123,7 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; - tlb_remove_ptdesc(tlb, pud); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); } /* @@ -142,7 +141,7 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; - tlb_remove_ptdesc(tlb, p4d); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); } #endif /* _S390_TLB_H */ diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index c73b89811a264..3e002dea6278f 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -193,13 +193,6 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) pagetable_dtor_free(ptdesc); } -void __tlb_remove_table(void *table) -{ - struct ptdesc *ptdesc = virt_to_ptdesc(table); - - pagetable_dtor_free(ptdesc); -} - #ifdef CONFIG_TRANSPARENT_HUGEPAGE static void pte_free_now(struct rcu_head *head) { diff --git a/arch/sparc/include/asm/tlb_32.h b/arch/sparc/include/asm/tlb_32.h index 5cd28a8793e39..910254867dfbd 100644 --- a/arch/sparc/include/asm/tlb_32.h +++ b/arch/sparc/include/asm/tlb_32.h @@ -2,6 +2,7 @@ #ifndef _SPARC_TLB_H #define _SPARC_TLB_H +#define __HAVE_ARCH_TLB_REMOVE_TABLE #include #endif /* _SPARC_TLB_H */ diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h index 3037187482db7..1a6e694418e39 100644 --- a/arch/sparc/include/asm/tlb_64.h +++ b/arch/sparc/include/asm/tlb_64.h @@ -33,6 +33,7 @@ void flush_tlb_pending(void); #define tlb_needs_table_invalidate() (false) #endif +#define __HAVE_ARCH_TLB_REMOVE_TABLE #include #endif /* _SPARC64_TLB_H */ diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index f64730be5ad67..3858dbf75880e 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -20,23 +20,6 @@ static inline void tlb_flush(struct mmu_gather *tlb) flush_tlb_mm_range(tlb->mm, start, 
end, stride_shift, tlb->freed_tables); } -/* - * While x86 architecture in general requires an IPI to perform TLB - * shootdown, enablement code for several hypervisors overrides - * .flush_tlb_others hook in pv_mmu_ops and implements it by issuing - * a hypercall. To keep software pagetable walkers safe in this case we - * switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the comment - * below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h - * for more details. - */ -static inline void __tlb_remove_table(void *table) -{ - struct ptdesc *ptdesc = (struct ptdesc *)table; - - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); -} - #ifdef CONFIG_PT_RECLAIM static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) { diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 709830274b756..69de47c7ef3c5 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -153,8 +153,9 @@ * * Useful if your architecture has non-page page directories. * - * When used, an architecture is expected to provide __tlb_remove_table() - * which does the actual freeing of these pages. + * When used, an architecture is expected to provide __tlb_remove_table() or + * use the generic __tlb_remove_table(), which does the actual freeing of these + * pages. * * MMU_GATHER_RCU_TABLE_FREE * @@ -207,6 +208,16 @@ struct mmu_table_batch { #define MAX_TABLE_BATCH \ ((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *)) +#ifndef __HAVE_ARCH_TLB_REMOVE_TABLE +static inline void __tlb_remove_table(void *table) +{ + struct ptdesc *ptdesc = (struct ptdesc *)table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); +} +#endif + extern void tlb_remove_table(struct mmu_gather *tlb, void *table); #else /* !CONFIG_MMU_GATHER_HAVE_TABLE_FREE */ From patchwork Mon Dec 30 09:07:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13923154 Received: from mail-pl1-f182.google.com (mail-pl1-f182.google.com [209.85.214.182]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CFAD31A239B for ; Mon, 30 Dec 2024 09:11:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735549900; cv=none; b=OsWqCFT43i/31qhuH+c0gx7FKNaELPKx6bLYImVK0kqblAKJnIPTYqdKcs+WtbIqvTcmLM1T4QxRDmq0Bi8Ef6iVFmrCRP0Z2qMV1nQCH293u+2LD9POE9eeu7w9zI1iWgFZ0QiYKaUNbB1TklfknLHUt2lgEVh7agYF41VaWPE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735549900; c=relaxed/simple; bh=AOHzYz3e0W68bypaywk5PqwWiTbsh7X+wa84FHfFzTo=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=bPQjKbvMvtJrHe8tHmCuAkh7qAh+CfU1VNRHipmnvqlgAvUJa0K68yQKoDe9u5lbOgrItGgZAoDTHUtliUHs5wZt1WhgQ9C3IR3uCmRRrIImeY/2VZhsrH+T5x+6xtIE87GciicGgzx86uXGdYjlgi0389cEjAzypr2yopG1AkA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=dMGXvhJm; arc=none smtp.client-ip=209.85.214.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com 
From: Qi Zheng
Subject: [PATCH v4 14/15] mm: pgtable: move __tlb_remove_table_one() in x86 to generic file
Date: Mon, 30 Dec 2024 17:07:49 +0800
Message-Id: <286e9777dd266dc610de20120fae453b84d3a868.1735549103.git.zhengqi.arch@bytedance.com>

The __tlb_remove_table_one() in x86 does not contain architecture-specific
content, so move it to the generic file.

Signed-off-by: Qi Zheng
---
 arch/x86/include/asm/tlb.h | 19 -------------------
 mm/mmu_gather.c            | 20 ++++++++++++++++++--
 2 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 3858dbf75880e..77f52bc1578a7 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -20,25 +20,6 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
 }
 
-#ifdef CONFIG_PT_RECLAIM
-static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
-{
-	struct ptdesc *ptdesc;
-
-	ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
-	__tlb_remove_table(ptdesc);
-}
-
-static inline void __tlb_remove_table_one(void *table)
-{
-	struct ptdesc *ptdesc;
-
-	ptdesc = table;
-	call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu);
-}
-#define __tlb_remove_table_one __tlb_remove_table_one
-#endif /* CONFIG_PT_RECLAIM */
-
 static inline void invlpg(unsigned long addr)
 {
 	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
 }
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 1e21022bcf339..7aa6f18c500b2 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -311,13 +311,29 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 	}
 }
 
-#ifndef __tlb_remove_table_one
+#ifdef CONFIG_PT_RECLAIM
+static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
+{
+	struct ptdesc *ptdesc;
+
+	ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
+	__tlb_remove_table(ptdesc);
+}
+
+static inline void __tlb_remove_table_one(void *table)
+{
+	struct ptdesc *ptdesc;
+
+	ptdesc = table;
+	call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu);
+}
+#else
 static inline void __tlb_remove_table_one(void *table)
 {
 	tlb_remove_table_sync_one();
 	__tlb_remove_table(table);
 }
-#endif
+#endif /* CONFIG_PT_RECLAIM */
 
 static void tlb_remove_table_one(void *table)
 {
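A short orientation note (not part of the patch): the code moved here is the
single-table fallback used by tlb_remove_table() when no batch page can be
allocated. After the move, both variants live in mm/mmu_gather.c and end up
in the same generic __tlb_remove_table(), roughly:

	#ifdef CONFIG_PT_RECLAIM
	/* defer the free: the RCU callback eventually calls __tlb_remove_table() */
	call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu);
	#else
	/* free immediately after synchronizing against concurrent walkers */
	tlb_remove_table_sync_one();
	__tlb_remove_table(table);
	#endif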
From patchwork Mon Dec 30 09:07:50 2024
From: Qi Zheng
Subject: [PATCH v4 15/15] mm: pgtable: introduce generic pagetable_dtor_free()
Date: Mon, 30 Dec 2024 17:07:50 +0800

The pte_free(), pmd_free(), __pud_free() and __p4d_free() in
asm-generic/pgalloc.h and the generic __tlb_remove_table() are basically the
same, so introduce pagetable_dtor_free() to deduplicate them. In addition,
the pagetable_dtor_free() in s390 does the same thing, so let s390 also call
the generic pagetable_dtor_free().

Signed-off-by: Qi Zheng
Suggested-by: Peter Zijlstra (Intel)
---
 arch/s390/mm/pgalloc.c        |  6 ------
 include/asm-generic/pgalloc.h | 12 ++++--------
 include/asm-generic/tlb.h     |  3 +--
 include/linux/mm.h            |  6 ++++++
 4 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 3e002dea6278f..a4e7619020931 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -180,12 +180,6 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	return table;
 }
 
-static void pagetable_dtor_free(struct ptdesc *ptdesc)
-{
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
-}
-
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
 	struct ptdesc *ptdesc = virt_to_ptdesc(table);
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 4afb346eae255..e3977ddca15e4 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -109,8 +109,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 {
 	struct ptdesc *ptdesc = page_ptdesc(pte_page);
 
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(ptdesc);
 }
@@ -153,8 +152,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 
 	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(ptdesc);
 }
 #endif
@@ -202,8 +200,7 @@ static inline void __pud_free(struct mm_struct *mm, pud_t *pud)
 	struct ptdesc *ptdesc = virt_to_ptdesc(pud);
 
 	BUG_ON((unsigned long)pud & (PAGE_SIZE-1));
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PUD_FREE
@@ -248,8 +245,7 @@ static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
 {
 	struct ptdesc
*ptdesc = virt_to_ptdesc(p4d); BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); + pagetable_dtor_free(ptdesc); } #ifndef __HAVE_ARCH_P4D_FREE diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 69de47c7ef3c5..a96d4b440f3da 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -213,8 +213,7 @@ static inline void __tlb_remove_table(void *table) { struct ptdesc *ptdesc = (struct ptdesc *)table; - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); + pagetable_dtor_free(ptdesc); } #endif diff --git a/include/linux/mm.h b/include/linux/mm.h index cad11fa10c192..94078c488e904 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3001,6 +3001,12 @@ static inline void pagetable_dtor(struct ptdesc *ptdesc) lruvec_stat_sub_folio(folio, NR_PAGETABLE); } +static inline void pagetable_dtor_free(struct ptdesc *ptdesc) +{ + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); +} + static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc);