From patchwork Thu May 18 14:47:32 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13246920
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 18 May 2023 07:47:32 -0700
From: "Paul E. McKenney"
To: Martin KaFai Lau
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH bpf] Use call_rcu_hurry() with synchronize_rcu_mult()
Message-ID: <358bde93-4933-4305-ac42-4d6f10c97c08@paulmck-laptop>
Reply-To: paulmck@kernel.org

The bpf_struct_ops_map_free() function must wait for both an RCU grace
period and an RCU Tasks grace period, and so it passes call_rcu() and
call_rcu_tasks() to synchronize_rcu_mult().  This works, but on ChromeOS
and Android platforms call_rcu() can have lazy semantics, resulting in
multi-second delays between call_rcu() invocation and invocation of the
corresponding callback.  Therefore, substitute call_rcu_hurry() for
call_rcu().

Signed-off-by: Paul E. McKenney
Cc: Martin KaFai Lau
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: Andrii Nakryiko
Cc: Song Liu
Cc: Yonghong Song
Cc: John Fastabend
Cc: KP Singh
Cc: Stanislav Fomichev
Cc: Hao Luo
Cc: Jiri Olsa
Cc: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org

diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index d3f0a4825fa6..bacffd6cae60 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -634,7 +634,7 @@ static void bpf_struct_ops_map_free(struct bpf_map *map)
 	 * in the tramopline image to finish before releasing
 	 * the trampoline image.
 	 */
-	synchronize_rcu_mult(call_rcu, call_rcu_tasks);
+	synchronize_rcu_mult(call_rcu_hurry, call_rcu_tasks);
 	__bpf_struct_ops_map_free(map);
 }
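
For context, synchronize_rcu_mult() waits for several grace-period types
at once by posting one callback per supplied call_rcu()-style function
and sleeping until every one of those callbacks has been invoked.  The
sketch below is a rough, hand-rolled illustration of what the
synchronize_rcu_mult(call_rcu_hurry, call_rcu_tasks) call in this patch
amounts to; the gp_waiter structure, gp_waiter_wakeup() callback, and
wait_for_rcu_and_rcu_tasks() function are invented for this illustration
and are not part of the patch or of the kernel's internal implementation,
which does equivalent bookkeeping on its own.

#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>

/* One waiter per grace-period type (illustrative only). */
struct gp_waiter {
	struct rcu_head head;
	struct completion done;
};

/* Callback invoked once the corresponding grace period has elapsed. */
static void gp_waiter_wakeup(struct rcu_head *head)
{
	struct gp_waiter *w = container_of(head, struct gp_waiter, head);

	complete(&w->done);
}

/* Wait for both an RCU and an RCU Tasks grace period, concurrently. */
static void wait_for_rcu_and_rcu_tasks(void)
{
	struct gp_waiter rcu_w, tasks_w;

	init_completion(&rcu_w.done);
	init_completion(&tasks_w.done);

	/* call_rcu_hurry() bypasses lazy callback batching. */
	call_rcu_hurry(&rcu_w.head, gp_waiter_wakeup);
	call_rcu_tasks(&tasks_w.head, gp_waiter_wakeup);

	/* The two grace periods elapse in parallel; wait for each. */
	wait_for_completion(&rcu_w.done);
	wait_for_completion(&tasks_w.done);
}

Because call_rcu_hurry() opts out of the lazy callback batching used on
CONFIG_RCU_LAZY kernels (such as the ChromeOS and Android configurations
mentioned above), the wakeup for the normal RCU grace period arrives
promptly instead of after a multi-second batching delay.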