From patchwork Fri May 3 22:31:45 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13653606
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni CC: Kuniyuki Iwashima , Kuniyuki Iwashima , Subject: [PATCH v1 net-next 1/6] af_unix: Add dead flag to struct scm_fp_list. Date: Fri, 3 May 2024 15:31:45 -0700 Message-ID: <20240503223150.6035-2-kuniyu@amazon.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20240503223150.6035-1-kuniyu@amazon.com> References: <20240503223150.6035-1-kuniyu@amazon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: EX19D045UWA002.ant.amazon.com (10.13.139.12) To EX19D004ANA001.ant.amazon.com (10.37.240.138) X-Patchwork-Delegate: kuba@kernel.org Commit 1af2dface5d2 ("af_unix: Don't access successor in unix_del_edges() during GC.") fixed use-after-free by avoid accessing edge->successor while GC is in progress. However, there could be a small race window where another process could call unix_del_edges() while gc_in_progress is true and __skb_queue_purge() is on the way. So, we cannot rely on gc_in_progress and need another marker per struct scm_fp_list which indicates if the skb is garbage-collected. This patch adds dead flag in struct scm_fp_list and set it true before calling __skb_queue_purge() in __unix_gc(). Fixes: 1af2dface5d2 ("af_unix: Don't access successor in unix_del_edges() during GC.") Signed-off-by: Kuniyuki Iwashima --- include/net/scm.h | 1 + net/core/scm.c | 1 + net/unix/garbage.c | 14 ++++++++++---- 3 files changed, 12 insertions(+), 4 deletions(-) diff --git a/include/net/scm.h b/include/net/scm.h index bbc5527809d1..0d35c7c77a74 100644 --- a/include/net/scm.h +++ b/include/net/scm.h @@ -33,6 +33,7 @@ struct scm_fp_list { short max; #ifdef CONFIG_UNIX bool inflight; + bool dead; struct list_head vertices; struct unix_edge *edges; #endif diff --git a/net/core/scm.c b/net/core/scm.c index 5763f3320358..4f6a14babe5a 100644 --- a/net/core/scm.c +++ b/net/core/scm.c @@ -91,6 +91,7 @@ static int scm_fp_copy(struct cmsghdr *cmsg, struct scm_fp_list **fplp) fpl->user = NULL; #if IS_ENABLED(CONFIG_UNIX) fpl->inflight = false; + fpl->dead = false; fpl->edges = NULL; INIT_LIST_HEAD(&fpl->vertices); #endif diff --git a/net/unix/garbage.c b/net/unix/garbage.c index d76450133e4f..1f8b8cdfcdc8 100644 --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -158,13 +158,11 @@ static void unix_add_edge(struct scm_fp_list *fpl, struct unix_edge *edge) unix_update_graph(unix_edge_successor(edge)); } -static bool gc_in_progress; - static void unix_del_edge(struct scm_fp_list *fpl, struct unix_edge *edge) { struct unix_vertex *vertex = edge->predecessor->vertex; - if (!gc_in_progress) + if (!fpl->dead) unix_update_graph(unix_edge_successor(edge)); list_del(&edge->vertex_entry); @@ -240,7 +238,7 @@ void unix_del_edges(struct scm_fp_list *fpl) unix_del_edge(fpl, edge); } while (i < fpl->count_unix); - if (!gc_in_progress) { + if (!fpl->dead) { receiver = fpl->edges[0].successor; receiver->scm_stat.nr_unix_fds -= fpl->count_unix; } @@ -559,9 +557,12 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist) list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); } +static bool gc_in_progress; + static void __unix_gc(struct work_struct *work) { struct sk_buff_head hitlist; + struct sk_buff *skb; spin_lock(&unix_gc_lock); @@ -579,6 +580,11 @@ static void __unix_gc(struct work_struct *work) spin_unlock(&unix_gc_lock); + skb_queue_walk(&hitlist, skb) { + if (UNIXCB(skb).fp) + UNIXCB(skb).fp->dead = true; + } + __skb_queue_purge(&hitlist); skip_gc: 
From patchwork Fri May 3 22:31:46 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13653607
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
"David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni CC: Kuniyuki Iwashima , Kuniyuki Iwashima , Subject: [PATCH v1 net-next 2/6] af_unix: Save the number of loops in inflight graph. Date: Fri, 3 May 2024 15:31:46 -0700 Message-ID: <20240503223150.6035-3-kuniyu@amazon.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20240503223150.6035-1-kuniyu@amazon.com> References: <20240503223150.6035-1-kuniyu@amazon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: EX19D036UWC003.ant.amazon.com (10.13.139.214) To EX19D004ANA001.ant.amazon.com (10.37.240.138) X-Patchwork-Delegate: kuba@kernel.org unix_walk_scc_fast() calls unix_scc_cyclic() for every SCC so that we can make unix_graph_maybe_cyclic false when all SCC are cleaned up. If we count the number of loops in the graph during Tarjan's algorithm, we need not call unix_scc_cyclic() in unix_walk_scc_fast(). Instead, we can just decrement the number when calling unix_collect_skb() and update unix_graph_maybe_cyclic based on the count. Signed-off-by: Kuniyuki Iwashima --- net/unix/garbage.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/net/unix/garbage.c b/net/unix/garbage.c index 1f8b8cdfcdc8..7ffb80dd422c 100644 --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -405,6 +405,7 @@ static bool unix_scc_cyclic(struct list_head *scc) static LIST_HEAD(unix_visited_vertices); static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2; +static unsigned long unix_graph_circles; static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index, struct sk_buff_head *hitlist) @@ -494,8 +495,8 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde if (scc_dead) unix_collect_skb(&scc, hitlist); - else if (!unix_graph_maybe_cyclic) - unix_graph_maybe_cyclic = unix_scc_cyclic(&scc); + else if (unix_scc_cyclic(&scc)) + unix_graph_circles++; list_del(&scc); } @@ -509,7 +510,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist) { unsigned long last_index = UNIX_VERTEX_INDEX_START; - unix_graph_maybe_cyclic = false; + unix_graph_circles = 0; /* Visit every vertex exactly once. * __unix_walk_scc() moves visited vertices to unix_visited_vertices. 
From patchwork Fri May 3 22:31:47 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13653608
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
CC: Kuniyuki Iwashima, Kuniyuki Iwashima,
Subject: [PATCH v1 net-next 3/6] af_unix: Manage inflight graph state as unix_graph_state.
Date: Fri, 3 May 2024 15:31:47 -0700
Message-ID: <20240503223150.6035-4-kuniyu@amazon.com>
In-Reply-To: <20240503223150.6035-1-kuniyu@amazon.com>
References: <20240503223150.6035-1-kuniyu@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

The graph state is currently managed by two variables, unix_graph_maybe_cyclic
and unix_graph_grouped.  However, unix_graph_grouped is checked only when
unix_graph_maybe_cyclic is true, so the graph state is effectively tri-state.

Let's merge the two variables into a single unix_graph_state.

Signed-off-by: Kuniyuki Iwashima
---
 net/unix/garbage.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 7ffb80dd422c..478b2eb479a2 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -112,8 +112,13 @@ static struct unix_vertex *unix_edge_successor(struct unix_edge *edge)
 	return edge->successor->vertex;
 }
 
-static bool unix_graph_maybe_cyclic;
-static bool unix_graph_grouped;
+enum {
+	UNIX_GRAPH_NOT_CYCLIC,
+	UNIX_GRAPH_MAYBE_CYCLIC,
+	UNIX_GRAPH_CYCLIC,
+};
+
+static unsigned char unix_graph_state;
 
 static void unix_update_graph(struct unix_vertex *vertex)
 {
@@ -123,8 +128,7 @@ static void unix_update_graph(struct unix_vertex *vertex)
 	if (!vertex)
 		return;
 
-	unix_graph_maybe_cyclic = true;
-	unix_graph_grouped = false;
+	unix_graph_state = UNIX_GRAPH_MAYBE_CYCLIC;
 }
 
 static LIST_HEAD(unix_unvisited_vertices);
@@ -525,8 +529,7 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
 
-	unix_graph_maybe_cyclic = !!unix_graph_circles;
-	unix_graph_grouped = true;
+	unix_graph_state = unix_graph_circles ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC;
 }
 
 static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
@@ -557,7 +560,7 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 
 	if (!unix_graph_circles)
-		unix_graph_maybe_cyclic = false;
+		unix_graph_state = UNIX_GRAPH_NOT_CYCLIC;
 }
 
 static bool gc_in_progress;
@@ -569,14 +572,14 @@ static void __unix_gc(struct work_struct *work)
 
 	spin_lock(&unix_gc_lock);
 
-	if (!unix_graph_maybe_cyclic) {
+	if (unix_graph_state == UNIX_GRAPH_NOT_CYCLIC) {
 		spin_unlock(&unix_gc_lock);
 		goto skip_gc;
 	}
 
 	__skb_queue_head_init(&hitlist);
 
-	if (unix_graph_grouped)
+	if (unix_graph_state == UNIX_GRAPH_CYCLIC)
 		unix_walk_scc_fast(&hitlist);
 	else
 		unix_walk_scc(&hitlist);
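[Editor's sketch] The merge can be read as collapsing the only three reachable combinations of the old flag pair into one value.  A tiny stand-alone sketch of that mapping (hypothetical names, not the kernel enum):

#include <stdbool.h>
#include <stdio.h>

enum graph_state {
	GRAPH_NOT_CYCLIC,    /* maybe_cyclic == false: skip GC entirely            */
	GRAPH_MAYBE_CYCLIC,  /* maybe_cyclic == true, grouped == false: slow walk  */
	GRAPH_CYCLIC,        /* maybe_cyclic == true, grouped == true: fast walk   */
};

static enum graph_state merge_flags(bool maybe_cyclic, bool grouped)
{
	if (!maybe_cyclic)
		return GRAPH_NOT_CYCLIC;
	return grouped ? GRAPH_CYCLIC : GRAPH_MAYBE_CYCLIC;
}

int main(void)
{
	printf("%d %d %d\n",
	       merge_flags(false, false),
	       merge_flags(true, false),
	       merge_flags(true, true));
	return 0;
}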
From patchwork Fri May 3 22:31:48 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13653609
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
CC: Kuniyuki Iwashima, Kuniyuki Iwashima,
Subject: [PATCH v1 net-next 4/6] af_unix: Move wait_for_unix_gc() to unix_prepare_fpl().
Date: Fri, 3 May 2024 15:31:48 -0700
Message-ID: <20240503223150.6035-5-kuniyu@amazon.com>
In-Reply-To: <20240503223150.6035-1-kuniyu@amazon.com>
References: <20240503223150.6035-1-kuniyu@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

unix_(dgram|stream)_sendmsg() call wait_for_unix_gc() to trigger GC when the
number of inflight AF_UNIX sockets is insane.  This does not happen in sane
use cases, and when it does happen, the offending process just keeps sending
FDs anyway.

We need not impose that duty on every normal sendmsg().  Instead, we can
trigger GC in unix_prepare_fpl(), which is called only when an fd of an
AF_UNIX socket is actually being passed.

Also, this renames wait_for_unix_gc() to __unix_schedule_gc() for the
following changes.
Signed-off-by: Kuniyuki Iwashima
---
 include/net/af_unix.h | 1 -
 net/unix/af_unix.c    | 4 ----
 net/unix/garbage.c    | 9 ++++++---
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index b6eedf7650da..ebd1b3ca8906 100644
--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -24,7 +24,6 @@ void unix_update_edges(struct unix_sock *receiver);
 int unix_prepare_fpl(struct scm_fp_list *fpl);
 void unix_destroy_fpl(struct scm_fp_list *fpl);
 void unix_gc(void);
-void wait_for_unix_gc(struct scm_fp_list *fpl);
 
 struct unix_vertex {
 	struct list_head edges;
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index dc1651541723..863058be35f3 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1925,8 +1925,6 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 	if (err < 0)
 		return err;
 
-	wait_for_unix_gc(scm.fp);
-
 	err = -EOPNOTSUPP;
 	if (msg->msg_flags&MSG_OOB)
 		goto out;
@@ -2202,8 +2200,6 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
 	if (err < 0)
 		return err;
 
-	wait_for_unix_gc(scm.fp);
-
 	err = -EOPNOTSUPP;
 	if (msg->msg_flags & MSG_OOB) {
 #if IS_ENABLED(CONFIG_AF_UNIX_OOB)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 478b2eb479a2..85c0500764d4 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -271,6 +271,8 @@ void unix_update_edges(struct unix_sock *receiver)
 	}
 }
 
+static void __unix_schedule_gc(struct scm_fp_list *fpl);
+
 int unix_prepare_fpl(struct scm_fp_list *fpl)
 {
 	struct unix_vertex *vertex;
@@ -292,6 +294,8 @@ int unix_prepare_fpl(struct scm_fp_list *fpl)
 	if (!fpl->edges)
 		goto err;
 
+	__unix_schedule_gc(fpl);
+
 	return 0;
 
 err:
@@ -607,7 +611,7 @@ void unix_gc(void)
 #define UNIX_INFLIGHT_TRIGGER_GC 16000
 #define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
 
-void wait_for_unix_gc(struct scm_fp_list *fpl)
+static void __unix_schedule_gc(struct scm_fp_list *fpl)
 {
 	/* If number of inflight sockets is insane,
 	 * force a garbage collect right now.
@@ -622,8 +626,7 @@ void wait_for_unix_gc(struct scm_fp_list *fpl)
 	/* Penalise users who want to send AF_UNIX sockets
 	 * but whose sockets have not been received yet.
 	 */
-	if (!fpl || !fpl->count_unix ||
-	    READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
+	if (READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
 		return;
 
 	if (READ_ONCE(gc_in_progress))
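[Editor's sketch] The call-site move can be modelled with a toy send path: the GC check lives behind the fd-list preparation step, so ordinary messages never pay for it.  All names below (fd_list, prepare_fpl(), maybe_schedule_gc(), do_sendmsg()) are hypothetical stand-ins, not the kernel API.

#include <stdbool.h>
#include <stdio.h>

struct fd_list { int count_unix; };

static void maybe_schedule_gc(const struct fd_list *fpl)
{
	printf("GC check for %d inflight AF_UNIX fds\n", fpl->count_unix);
}

/* Called only when SCM_RIGHTS actually carries AF_UNIX sockets,
 * so the common send path pays nothing.
 */
static int prepare_fpl(struct fd_list *fpl)
{
	if (!fpl->count_unix)
		return 0;               /* no AF_UNIX fds: nothing to do      */
	/* ... allocate vertices/edges here ... */
	maybe_schedule_gc(fpl);         /* trigger point moved from sendmsg() */
	return 0;
}

static int do_sendmsg(struct fd_list *fpl)
{
	if (fpl && prepare_fpl(fpl))    /* GC hook lives behind this call     */
		return -1;
	/* ... queue the message ... */
	return 0;
}

int main(void)
{
	struct fd_list scm = { .count_unix = 2 };
	do_sendmsg(NULL);   /* ordinary data: no GC consideration at all */
	do_sendmsg(&scm);   /* passing AF_UNIX fds: GC check runs once   */
	return 0;
}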
From patchwork Fri May 3 22:31:49 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13653610
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
CC: Kuniyuki Iwashima, Kuniyuki Iwashima,
Subject: [PATCH v1 net-next 5/6] af_unix: Schedule GC based on graph state during sendmsg().
Date: Fri, 3 May 2024 15:31:49 -0700
Message-ID: <20240503223150.6035-6-kuniyu@amazon.com>
In-Reply-To: <20240503223150.6035-1-kuniyu@amazon.com>
References: <20240503223150.6035-1-kuniyu@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

The conventional test to trigger GC was based only on the number of inflight
sockets.  Now we have more reliable data indicating whether a loop exists in
the graph.

Depending on the graph state:

  1. UNIX_GRAPH_NOT_CYCLIC:   do not schedule GC
  2. UNIX_GRAPH_MAYBE_CYCLIC: schedule GC if unix_tot_inflight > 16384
  3. UNIX_GRAPH_CYCLIC:       schedule GC if unix_graph_circles > 1024

1024 might sound much smaller than 16384, but if the number of loops is
larger than 1024, something is already going wrong.

Signed-off-by: Kuniyuki Iwashima
---
 net/unix/garbage.c | 44 ++++++++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 16 deletions(-)

diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 85c0500764d4..48cea3cf4a42 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -128,7 +128,7 @@ static void unix_update_graph(struct unix_vertex *vertex)
 	if (!vertex)
 		return;
 
-	unix_graph_state = UNIX_GRAPH_MAYBE_CYCLIC;
+	WRITE_ONCE(unix_graph_state, UNIX_GRAPH_MAYBE_CYCLIC);
 }
 
 static LIST_HEAD(unix_unvisited_vertices);
@@ -533,7 +533,8 @@ static void unix_walk_scc(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 	swap(unix_vertex_unvisited_index, unix_vertex_grouped_index);
 
-	unix_graph_state = unix_graph_circles ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC;
+	WRITE_ONCE(unix_graph_state,
+		   unix_graph_circles ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC);
 }
 
 static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
@@ -555,7 +556,7 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 
 		if (scc_dead) {
 			unix_collect_skb(&scc, hitlist);
-			unix_graph_circles--;
+			WRITE_ONCE(unix_graph_circles, unix_graph_circles - 1);
 		}
 
 		list_del(&scc);
@@ -564,7 +565,7 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
 	list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
 
 	if (!unix_graph_circles)
-		unix_graph_state = UNIX_GRAPH_NOT_CYCLIC;
+		WRITE_ONCE(unix_graph_state, UNIX_GRAPH_NOT_CYCLIC);
 }
 
 static bool gc_in_progress;
@@ -608,27 +609,38 @@ void unix_gc(void)
 	queue_work(system_unbound_wq, &unix_gc_work);
 }
 
-#define UNIX_INFLIGHT_TRIGGER_GC 16000
+#define UNIX_INFLIGHT_SANE_CIRCLES (1 << 10)
+#define UNIX_INFLIGHT_SANE_SOCKETS (1 << 14)
 #define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
 
 static void __unix_schedule_gc(struct scm_fp_list *fpl)
 {
-	/* If number of inflight sockets is insane,
-	 * force a garbage collect right now.
-	 *
-	 * Paired with the WRITE_ONCE() in unix_inflight(),
-	 * unix_notinflight(), and __unix_gc().
+	unsigned char graph_state = READ_ONCE(unix_graph_state);
+	bool wait = false;
+
+	if (graph_state == UNIX_GRAPH_NOT_CYCLIC)
+		return;
+
+	/* If the number of inflight sockets or cyclic references
+	 * is insane, schedule garbage collector if not running.
 	 */
-	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
-	    !READ_ONCE(gc_in_progress))
-		unix_gc();
+	if (graph_state == UNIX_GRAPH_CYCLIC) {
+		if (READ_ONCE(unix_graph_circles) < UNIX_INFLIGHT_SANE_CIRCLES)
+			return;
+	} else {
+		if (READ_ONCE(unix_tot_inflight) < UNIX_INFLIGHT_SANE_SOCKETS)
+			return;
+	}
 
 	/* Penalise users who want to send AF_UNIX sockets
 	 * but whose sockets have not been received yet.
 	 */
-	if (READ_ONCE(fpl->user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
-		return;
+	if (READ_ONCE(fpl->user->unix_inflight) > UNIX_INFLIGHT_SANE_USER)
+		wait = true;
+
+	if (!READ_ONCE(gc_in_progress))
+		unix_gc();
 
-	if (READ_ONCE(gc_in_progress))
+	if (wait)
 		flush_work(&unix_gc_work);
 }
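[Editor's sketch] The scheduling policy can be written as a pure decision function.  The sketch below is a user-space model under stated assumptions, not the kernel implementation: the decision struct and names are hypothetical, and SANE_USER mirrors SCM_MAX_FD * 8 with SCM_MAX_FD taken as 253 from the kernel headers.

#include <stdbool.h>
#include <stdio.h>

enum graph_state { NOT_CYCLIC, MAYBE_CYCLIC, CYCLIC };

#define SANE_CIRCLES (1 << 10)   /* 1024                           */
#define SANE_SOCKETS (1 << 14)   /* 16384                          */
#define SANE_USER    (253 * 8)   /* SCM_MAX_FD * 8 in the kernel   */

struct decision { bool schedule; bool wait; };

static struct decision schedule_gc(enum graph_state state,
				   unsigned long circles,
				   unsigned long tot_inflight,
				   unsigned long user_inflight)
{
	struct decision d = { false, false };

	if (state == NOT_CYCLIC)        /* provably no garbage: do nothing */
		return d;

	if (state == CYCLIC) {
		if (circles < SANE_CIRCLES)
			return d;       /* below the "insane" threshold */
	} else {
		if (tot_inflight < SANE_SOCKETS)
			return d;
	}

	d.schedule = true;
	d.wait = user_inflight > SANE_USER;  /* penalise heavy senders */
	return d;
}

int main(void)
{
	struct decision d = schedule_gc(CYCLIC, 2000, 0, 5000);
	printf("schedule=%d wait=%d\n", d.schedule, d.wait);
	return 0;
}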
From patchwork Fri May 3 22:31:50 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13653611
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
CC: Kuniyuki Iwashima, Kuniyuki Iwashima,
Subject: [PATCH v1 net-next 6/6] af_unix: Schedule GC only if loop exists during close().
Date: Fri, 3 May 2024 15:31:50 -0700
Message-ID: <20240503223150.6035-7-kuniyu@amazon.com>
In-Reply-To: <20240503223150.6035-1-kuniyu@amazon.com>
References: <20240503223150.6035-1-kuniyu@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

If unix_tot_inflight is not 0 when an AF_UNIX socket is close()d, GC is
always scheduled.  However, we need not do so if we know that no loop exists
in the inflight graph.

Signed-off-by: Kuniyuki Iwashima
---
 include/net/af_unix.h |  3 +--
 net/unix/af_unix.c    |  3 +--
 net/unix/garbage.c    | 30 +++++++++++++++---------------
 3 files changed, 17 insertions(+), 19 deletions(-)

diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index ebd1b3ca8906..1270b2c08b8f 100644
--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -17,13 +17,12 @@ static inline struct unix_sock *unix_get_socket(struct file *filp)
 }
 #endif
 
-extern unsigned int unix_tot_inflight;
 void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver);
 void unix_del_edges(struct scm_fp_list *fpl);
 void unix_update_edges(struct unix_sock *receiver);
 int unix_prepare_fpl(struct scm_fp_list *fpl);
 void unix_destroy_fpl(struct scm_fp_list *fpl);
-void unix_gc(void);
+void unix_schedule_gc(void);
 
 struct unix_vertex {
 	struct list_head edges;
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 863058be35f3..b99f7170835e 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -677,8 +677,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
 	 *	What the above comment does talk about? --ANK(980817)
 	 */
 
-	if (READ_ONCE(unix_tot_inflight))
-		unix_gc();		/* Garbage collect fds */
+	unix_schedule_gc();
 }
 
 static void init_peercred(struct sock *sk)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 48cea3cf4a42..2cecbb97882c 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -189,7 +189,7 @@ static void unix_free_vertices(struct scm_fp_list *fpl)
 }
 
 static DEFINE_SPINLOCK(unix_gc_lock);
-unsigned int unix_tot_inflight;
+static unsigned int unix_tot_inflight;
 
 void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver)
 {
@@ -577,11 +577,6 @@ static void __unix_gc(struct work_struct *work)
 
 	spin_lock(&unix_gc_lock);
 
-	if (unix_graph_state == UNIX_GRAPH_NOT_CYCLIC) {
-		spin_unlock(&unix_gc_lock);
-		goto skip_gc;
-	}
-
 	__skb_queue_head_init(&hitlist);
 
 	if (unix_graph_state == UNIX_GRAPH_CYCLIC)
@@ -597,18 +592,12 @@ static void __unix_gc(struct work_struct *work)
 	}
 
 	__skb_queue_purge(&hitlist);
-skip_gc:
+
 	WRITE_ONCE(gc_in_progress, false);
 }
 
 static DECLARE_WORK(unix_gc_work, __unix_gc);
 
-void unix_gc(void)
-{
-	WRITE_ONCE(gc_in_progress, true);
-	queue_work(system_unbound_wq, &unix_gc_work);
-}
-
 #define UNIX_INFLIGHT_SANE_CIRCLES (1 << 10)
 #define UNIX_INFLIGHT_SANE_SOCKETS (1 << 14)
 #define UNIX_INFLIGHT_SANE_USER (SCM_MAX_FD * 8)
@@ -621,6 +610,9 @@ static void __unix_schedule_gc(struct scm_fp_list *fpl)
 	if (graph_state == UNIX_GRAPH_NOT_CYCLIC)
 		return;
 
+	if (!fpl)
+		goto schedule;
+
 	/* If the number of inflight sockets or cyclic references
 	 * is insane, schedule garbage collector if not running.
 	 */
@@ -638,9 +630,17 @@ static void __unix_schedule_gc(struct scm_fp_list *fpl)
 	if (READ_ONCE(fpl->user->unix_inflight) > UNIX_INFLIGHT_SANE_USER)
 		wait = true;
 
-	if (!READ_ONCE(gc_in_progress))
-		unix_gc();
+schedule:
+	if (!READ_ONCE(gc_in_progress)) {
+		WRITE_ONCE(gc_in_progress, true);
+		queue_work(system_unbound_wq, &unix_gc_work);
+	}
 
 	if (wait)
 		flush_work(&unix_gc_work);
 }
+
+void unix_schedule_gc(void)
+{
+	__unix_schedule_gc(NULL);
+}