From patchwork Fri Aug 16 07:22:52 2024
X-Patchwork-Submitter: Neeraj Upadhyay
X-Patchwork-Id: 13765556
From: neeraj.upadhyay@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    paulmck@kernel.org, neeraj.upadhyay@kernel.org, neeraj.upadhyay@amd.com,
    boqun.feng@gmail.com, joel@joelfernandes.org, urezki@gmail.com,
    frederic@kernel.org
Subject: [PATCH rcu 1/3] rcu/kfree: Warn on unexpected tail state
Date: Fri, 16 Aug 2024 12:52:52 +0530
Message-Id: <20240816072254.65928-1-neeraj.upadhyay@kernel.org>
In-Reply-To: <20240816072210.GA65644@neeraj.linux>
References: <20240816072210.GA65644@neeraj.linux>

From: "Paul E. McKenney"

Within the rcu_sr_normal_gp_cleanup_work() function, there is an acquire
load from rcu_state.srs_done_tail, which is expected to be non-NULL. This
commit adds a WARN_ON_ONCE() to check this expectation.

Signed-off-by: Paul E. McKenney
Signed-off-by: Neeraj Upadhyay
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e641cc681901..0f41a81138dc 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1649,7 +1649,7 @@ static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work)
 	 * the done tail list manipulations are protected here.
 	 */
 	done = smp_load_acquire(&rcu_state.srs_done_tail);
-	if (!done)
+	if (WARN_ON_ONCE(!done))
 		return;
 
 	WARN_ON_ONCE(!rcu_sr_is_wait_head(done));
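
The key to the one-line change above is that WARN_ON_ONCE() evaluates to its
condition, so it can sit directly in the early-return test: an unexpected NULL
pointer now produces a one-time warning instead of a silent return. The
following standalone sketch illustrates that idiom; the simplified
WARN_ON_ONCE() macro and the cleanup_work()/main() scaffolding are stand-ins
invented for illustration, not the kernel's implementations.

#include <stdio.h>

/*
 * Simplified stand-in for the kernel's WARN_ON_ONCE(): it complains at
 * most once per call site and evaluates to the condition, so it can be
 * used directly inside an if ().  (Uses a GCC/Clang statement expression.)
 */
#define WARN_ON_ONCE(cond) ({						\
	static int __warned;						\
	int __ret = !!(cond);						\
	if (__ret && !__warned) {					\
		__warned = 1;						\
		fprintf(stderr, "warning: %s:%d: %s\n",			\
			__FILE__, __LINE__, #cond);			\
	}								\
	__ret;								\
})

/* Illustrative counterpart of rcu_sr_normal_gp_cleanup_work(). */
static void cleanup_work(void *done)
{
	/* Warn (once) and bail out if 'done' is unexpectedly NULL. */
	if (WARN_ON_ONCE(!done))
		return;

	/* ... process the non-NULL 'done' pointer here ... */
}

int main(void)
{
	cleanup_work(NULL);	/* prints the warning once */
	cleanup_work(NULL);	/* silent: this call site already warned */
	return 0;
}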

From patchwork Fri Aug 16 07:22:53 2024
X-Patchwork-Submitter: Neeraj Upadhyay
X-Patchwork-Id: 13765569
From: neeraj.upadhyay@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    paulmck@kernel.org, neeraj.upadhyay@kernel.org, neeraj.upadhyay@amd.com,
    boqun.feng@gmail.com, joel@joelfernandes.org, urezki@gmail.com,
    frederic@kernel.org
Subject: [PATCH rcu 2/3] rcu: Better define "atomic" for list replacement
Date: Fri, 16 Aug 2024 12:52:53 +0530
Message-Id: <20240816072254.65928-2-neeraj.upadhyay@kernel.org>
In-Reply-To: <20240816072210.GA65644@neeraj.linux>
References: <20240816072210.GA65644@neeraj.linux>

From: "Paul E. McKenney"

The kernel-doc headers for list_replace_rcu() and hlist_replace_rcu() claim
that the replacement is atomic, which it is, but only for readers. Avoid
confusion by making it clear that the atomic nature of these functions
applies only to readers, not to concurrent updaters.

Signed-off-by: Paul E. McKenney
Signed-off-by: Neeraj Upadhyay
---
 include/linux/rculist.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index 3dc1e58865f7..14dfa6008467 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -191,7 +191,10 @@ static inline void hlist_del_init_rcu(struct hlist_node *n)
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The @old entry will be replaced with the @new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically from
+ * the perspective of concurrent readers. It is the caller's responsibility
+ * to synchronize with concurrent updaters, if any.
+ *
  * Note: @old should not be empty.
  */
 static inline void list_replace_rcu(struct list_head *old,
@@ -519,7 +522,9 @@ static inline void hlist_del_rcu(struct hlist_node *n)
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The @old entry will be replaced with the @new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically from
+ * the perspective of concurrent readers. It is the caller's responsibility
+ * to synchronize with concurrent updaters, if any.
  */
 static inline void hlist_replace_rcu(struct hlist_node *old,
 				     struct hlist_node *new)
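
To make the distinction drawn by the new wording concrete: an updater still
needs its own mutual exclusion around list_replace_rcu(), plus a grace period
before freeing the replaced element; only readers traversing the list observe
the swap as atomic. The sketch below shows one plausible arrangement and is
not part of the patch; struct foo, my_head, my_lock, replace_foo(), and
sum_foo() are hypothetical names used only for illustration.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
	int data;
	struct list_head node;
};

static LIST_HEAD(my_head);
static DEFINE_SPINLOCK(my_lock);	/* serializes updaters only */

/* Updater: the spinlock orders this against other updaters; the RCU
 * grace period keeps 'old' around for readers that may still see it. */
static void replace_foo(struct foo *old, struct foo *new)
{
	spin_lock(&my_lock);
	list_replace_rcu(&old->node, &new->node);
	spin_unlock(&my_lock);

	synchronize_rcu();	/* wait for pre-existing readers to finish */
	kfree(old);
}

/* Reader: sees either the old or the new element, never a torn list,
 * which is all that "atomically" promises here. */
static int sum_foo(void)
{
	struct foo *p;
	int sum = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(p, &my_head, node)
		sum += p->data;
	rcu_read_unlock();

	return sum;
}

Dropping the spinlock while keeping the RCU read-side protection would let two
updaters race on the same element, which is exactly the caller responsibility
that the updated kernel-doc now spells out.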
McKenney" The kernel-doc headers for list_replace_rcu() and hlist_replace_rcu() claim that the replacement is atomic, which it is, but only for readers. Avoid confusion by making it clear that the atomic nature of these functions applies only to readers, not to concurrent updaters. Signed-off-by: Paul E. McKenney Signed-off-by: Neeraj Upadhyay --- include/linux/rculist.h | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/include/linux/rculist.h b/include/linux/rculist.h index 3dc1e58865f7..14dfa6008467 100644 --- a/include/linux/rculist.h +++ b/include/linux/rculist.h @@ -191,7 +191,10 @@ static inline void hlist_del_init_rcu(struct hlist_node *n) * @old : the element to be replaced * @new : the new element to insert * - * The @old entry will be replaced with the @new entry atomically. + * The @old entry will be replaced with the @new entry atomically from + * the perspective of concurrent readers. It is the caller's responsibility + * to synchronize with concurrent updaters, if any. + * * Note: @old should not be empty. */ static inline void list_replace_rcu(struct list_head *old, @@ -519,7 +522,9 @@ static inline void hlist_del_rcu(struct hlist_node *n) * @old : the element to be replaced * @new : the new element to insert * - * The @old entry will be replaced with the @new entry atomically. + * The @old entry will be replaced with the @new entry atomically from + * the perspective of concurrent readers. It is the caller's responsibility + * to synchronize with concurrent updaters, if any. */ static inline void hlist_replace_rcu(struct hlist_node *old, struct hlist_node *new) From patchwork Fri Aug 16 07:22:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neeraj Upadhyay X-Patchwork-Id: 13765570 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2DDBB12D758; Fri, 16 Aug 2024 07:23:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723793036; cv=none; b=NL8qopQF7nirYSV9zV+TkjuxNdAa2PYGTnixdb6mxSr8iLv/oHVUi859tpv7vFdidewakjxrTvk2jY23RCnOwDfEJymLZebYGJHO0WBFTqJGBlLsLvC8AMnK+bnUKF/ce02rxgNTb1TYwDrPujteoRopN6MBrX7MSUoHvfXIOVw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723793036; c=relaxed/simple; bh=eGmgAaZh6wSDvQDRjeeZIys0lYofLe7O5SLfEHYjDpk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=omQ9lc9TYaI/BSjyh4kcSijsZkEi+0k5YirY0dgjIHxrWQmanfPPYvt8Ttefm9frtkYkZxREL1U40qJq/KWp5s0tDkTEy63cJ5nvu4oqXjz54QwL8/vRpXV8QiSgHu9kW1Yd5E4S/8UnLWTaGjeSYDHMNnzRgBkU69O5LlUD86M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=d0XzHVcx; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="d0XzHVcx" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1211EC32782; Fri, 16 Aug 2024 07:23:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723793035; bh=eGmgAaZh6wSDvQDRjeeZIys0lYofLe7O5SLfEHYjDpk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
---
 kernel/rcu/tree.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 0f41a81138dc..d5bf824159da 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3227,7 +3227,7 @@ struct kvfree_rcu_bulk_data {
 	struct list_head list;
 	struct rcu_gp_oldstate gp_snap;
 	unsigned long nr_records;
-	void *records[];
+	void *records[] __counted_by(nr_records);
 };
 
 /*
@@ -3767,7 +3767,8 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
 	}
 
 	// Finally insert and update the GP for this page.
-	bnode->records[bnode->nr_records++] = ptr;
+	bnode->nr_records++;
+	bnode->records[bnode->nr_records - 1] = ptr;
 	get_state_synchronize_rcu_full(&bnode->gp_snap);
 	atomic_inc(&(*krcp)->bulk_count[idx]);
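
The increment is moved ahead of the store in the second hunk because, once
records[] is annotated, the bounds checking enabled by CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE treats nr_records as the number of currently valid
elements, so writing records[nr_records] before bumping the counter would
itself be flagged as out of bounds. Below is a minimal sketch of the resulting
pattern; struct my_buf and add_record() are hypothetical names standing in for
the real kvfree_rcu structures.

#include <linux/compiler_attributes.h>

/* Hypothetical container mirroring the kvfree_rcu_bulk_data layout. */
struct my_buf {
	unsigned long nr_records;
	void *records[] __counted_by(nr_records);
};

/*
 * With __counted_by(), bounds-checking instrumentation treats nr_records
 * as the array's current length.  Storing to index nr_records before the
 * increment would be reported as out of bounds, so bump the counter first
 * and then write the last (newly valid) slot.
 */
static void add_record(struct my_buf *buf, void *ptr)
{
	buf->nr_records++;
	buf->records[buf->nr_records - 1] = ptr;
}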