From patchwork Fri Feb 23 21:39:59 2024
X-Patchwork-Submitter: Kuniyuki Iwashima
X-Patchwork-Id: 13570086
X-Patchwork-Delegate: kuba@kernel.org
From: Kuniyuki Iwashima
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni CC: Kuniyuki Iwashima , Kuniyuki Iwashima , Subject: [PATCH v3 net-next 10/14] af_unix: Avoid Tarjan's algorithm if unnecessary. Date: Fri, 23 Feb 2024 13:39:59 -0800 Message-ID: <20240223214003.17369-11-kuniyu@amazon.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20240223214003.17369-1-kuniyu@amazon.com> References: <20240223214003.17369-1-kuniyu@amazon.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: EX19D031UWA002.ant.amazon.com (10.13.139.96) To EX19D004ANA003.ant.amazon.com (10.37.240.184) X-Patchwork-Delegate: kuba@kernel.org Once a cyclic reference is formed, we need to run GC to check if there is dead SCC. However, we do not need to run Tarjan's algorithm if we know that the shape of the inflight graph has not been changed. If an edge is added/updated/deleted and the edge's successor is inflight, we set false to unix_graph_grouped, which means we need to re-classify SCC. Once we finalise SCC, we set false to unix_graph_grouped. While unix_graph_grouped is false, we can iterate the grouped SCC using vertex->scc_entry in unix_walk_scc_fast(). list_add() and list_for_each_entry_reverse() uses seem weird, but they are to keep the vertex order consistent and make writing test easier. Signed-off-by: Kuniyuki Iwashima --- net/unix/garbage.c | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/net/unix/garbage.c b/net/unix/garbage.c index 28506b32dcb2..1d9a0498dec5 100644 --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -110,6 +110,7 @@ static struct unix_vertex *unix_edge_successor(struct unix_edge *edge) } static bool unix_graph_maybe_cyclic; +static bool unix_graph_grouped; static void unix_graph_update(struct unix_vertex *vertex) { @@ -120,6 +121,7 @@ static void unix_graph_update(struct unix_vertex *vertex) return; unix_graph_maybe_cyclic = true; + unix_graph_grouped = false; } static LIST_HEAD(unix_unvisited_vertices); @@ -141,6 +143,7 @@ static void unix_add_edge(struct scm_fp_list *fpl, struct unix_edge *edge) vertex->index = unix_vertex_unvisited_index; vertex->out_degree = 0; INIT_LIST_HEAD(&vertex->edges); + INIT_LIST_HEAD(&vertex->scc_entry); list_move_tail(&vertex->entry, &unix_unvisited_vertices); @@ -379,6 +382,26 @@ static void unix_walk_scc(void) list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); swap(unix_vertex_unvisited_index, unix_vertex_grouped_index); + + unix_graph_grouped = true; +} + +static void unix_walk_scc_fast(void) +{ + while (!list_empty(&unix_unvisited_vertices)) { + struct unix_vertex *vertex; + struct list_head scc; + + vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); + list_add(&scc, &vertex->scc_entry); + + list_for_each_entry_reverse(vertex, &scc, scc_entry) + list_move_tail(&vertex->entry, &unix_visited_vertices); + + list_del(&scc); + } + + list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices); } static LIST_HEAD(gc_candidates); @@ -531,7 +554,10 @@ static void __unix_gc(struct work_struct *work) if (!unix_graph_maybe_cyclic) goto skip_gc; - unix_walk_scc(); + if (unix_graph_grouped) + unix_walk_scc_fast(); + else + unix_walk_scc(); /* First, select candidates for garbage collection. Only * in-flight sockets are considered, and from those only ones