From patchwork Wed Jul 19 20:18:16 2023
X-Patchwork-Submitter: Anjali Kulkarni
X-Patchwork-Id: 13319460
X-Patchwork-Delegate: kuba@kernel.org
From: Anjali Kulkarni
To: davem@davemloft.net
Cc: Liam.Howlett@Oracle.com, akpm@linux-foundation.org, david@fries.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net, ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org, socketcan@hartkopp.net, petrm@nvidia.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, anjali.k.kulkarni@oracle.com
Subject: [PATCH net-next v10 1/6] netlink: Reverse the patch which removed filtering
Date: Wed, 19 Jul 2023 13:18:16 -0700
Message-ID: <20230719201821.495037-2-anjali.k.kulkarni@oracle.com>
In-Reply-To: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
References: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
To use filtering at the connector and cn_proc layers, we need to enable filtering in the netlink layer. This reverses the patch which removed netlink filtering, commit 549017aa1bb7 ("netlink: remove netlink_broadcast_filtered").

Signed-off-by: Anjali Kulkarni
Reviewed-by: Liam R. Howlett
Acked-by: Jakub Kicinski
---
 include/linux/netlink.h  |  5 +++++
 net/netlink/af_netlink.c | 27 +++++++++++++++++++++++++--
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/include/linux/netlink.h b/include/linux/netlink.h index 9eec3f4f5351..3a6563681b50 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -227,6 +227,11 @@ bool netlink_strict_get_check(struct sk_buff *skb); int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 portid, int nonblock); int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 portid, __u32 group, gfp_t allocation); +int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb, + __u32 portid, __u32 group, gfp_t allocation, + int (*filter)(struct sock *dsk, + struct sk_buff *skb, void *data), + void *filter_data); int netlink_set_err(struct sock *ssk, __u32 portid, __u32 group, int code); int netlink_register_notifier(struct notifier_block *nb); int netlink_unregister_notifier(struct notifier_block *nb);

diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index 9c9df143a2ec..6c0bcde620e8 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -1432,6 +1432,8 @@ struct netlink_broadcast_data { int delivered; gfp_t allocation; struct sk_buff *skb, *skb2; + int (*tx_filter)(struct sock *dsk, struct sk_buff *skb, void *data); + void *tx_data; }; static void do_one_broadcast(struct sock *sk, @@ -1485,6 +1487,13 @@ static void do_one_broadcast(struct sock *sk, p->delivery_failure = 1; goto out; } + + if (p->tx_filter && p->tx_filter(sk, p->skb2, p->tx_data)) { + kfree_skb(p->skb2); + p->skb2 = NULL; + goto out; + } + if (sk_filter(sk, p->skb2)) { kfree_skb(p->skb2); p->skb2 = NULL; @@ -1507,8 +1516,12 @@ static void do_one_broadcast(struct sock *sk, sock_put(sk); } -int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid, - u32 group, gfp_t allocation) +int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb, + u32 portid, + u32 group, gfp_t allocation, + int (*filter)(struct sock *dsk, + struct sk_buff *skb, void *data), + void *filter_data) { struct net *net = sock_net(ssk); struct netlink_broadcast_data info; @@ -1527,6 +1540,8 @@ int netlink_broadcast(struct sock *ssk,
struct sk_buff *skb, u32 portid, info.allocation = allocation; info.skb = skb; info.skb2 = NULL; + info.tx_filter = filter; + info.tx_data = filter_data; /* While we sleep in clone, do not allow to change socket list */ @@ -1552,6 +1567,14 @@ int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid, } return -ESRCH; } +EXPORT_SYMBOL(netlink_broadcast_filtered); + +int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid, + u32 group, gfp_t allocation) +{ + return netlink_broadcast_filtered(ssk, skb, portid, group, allocation, + NULL, NULL); +} EXPORT_SYMBOL(netlink_broadcast); struct netlink_set_err_data { From patchwork Wed Jul 19 20:18:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anjali Kulkarni X-Patchwork-Id: 13319457 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B2DB1FB4C for ; Wed, 19 Jul 2023 20:19:24 +0000 (UTC) Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37F0C135; Wed, 19 Jul 2023 13:19:20 -0700 (PDT) Received: from pps.filterd (m0246629.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 36JFOrIN009107; Wed, 19 Jul 2023 20:18:57 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2023-03-30; bh=1AOtJcUyIVfRy13BpKom4GJyhdzCzwvqpJheJ5qpmHg=; b=UTKOQGG9ibgfd7KyHwFnD4rCyaGlJ3ihb/LbHPZE2Qri4771ixtb/7AKor5TXOoz9di/ BgIUa5i+ozpNikU84f6vgW4UgPx8zUki2klFr7AaG2cCw8prZHS6K0qFmKIZKhGcjaxq m1o1jIbZadhMqR61ZW9WbRrQfWZovho0Mc5Ksq/8qNX0X22OjvX94v8KTrvopbDc4U9g 4vvIcEi4KmJamq3BwbE4oevXWUeZCVSRX1yiuZYF8Ubo5l0Jxj08HYOkVp6nWTxc8Ygu Zvl9NGOc7yk3E+6oMljmzU8Y9Sx92efDpkbCchyidOybiENNjZUrK6Rhnl68tPZgkNdM sg== Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.appoci.oracle.com [138.1.114.2]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3run780bwy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 19 Jul 2023 20:18:57 +0000 Received: from pps.filterd (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19) with ESMTP id 36JK3lWQ038247; Wed, 19 Jul 2023 20:18:56 GMT Received: from pps.reinject (localhost [127.0.0.1]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id 3ruhw7dwuj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 19 Jul 2023 20:18:56 +0000 Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 36JKIrMe007349; Wed, 19 Jul 2023 20:18:55 GMT Received: from ca-dev112.us.oracle.com (ca-dev112.us.oracle.com [10.129.136.47]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTP id 3ruhw7dwct-3; Wed, 19 Jul 2023 20:18:55 +0000 From: Anjali Kulkarni To: davem@davemloft.net Cc: Liam.Howlett@Oracle.com, akpm@linux-foundation.org, david@fries.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, zbr@ioremap.net, brauner@kernel.org, 
johannes@sipsolutions.net, ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org, socketcan@hartkopp.net, petrm@nvidia.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, anjali.k.kulkarni@oracle.com
Subject: [PATCH net-next v10 2/6] netlink: Add new netlink_release function
Date: Wed, 19 Jul 2023 13:18:17 -0700
Message-ID: <20230719201821.495037-3-anjali.k.kulkarni@oracle.com>
In-Reply-To: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
References: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>

A new function, netlink_release, is added in netlink_sock to store the protocol's release function. It is called when the socket is deleted, and can be supplied by the protocol via the release function in netlink_kernel_cfg. This is being added for the NETLINK_CONNECTOR protocol, so it can free its data when the socket is deleted.

Signed-off-by: Anjali Kulkarni Reviewed-by: Liam R.
Howlett Acked-by: Jakub Kicinski --- include/linux/netlink.h | 1 + net/netlink/af_netlink.c | 6 ++++++ net/netlink/af_netlink.h | 4 ++++ 3 files changed, 11 insertions(+) diff --git a/include/linux/netlink.h b/include/linux/netlink.h index 3a6563681b50..75d7de34c908 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -50,6 +50,7 @@ struct netlink_kernel_cfg { struct mutex *cb_mutex; int (*bind)(struct net *net, int group); void (*unbind)(struct net *net, int group); + void (*release) (struct sock *sk, unsigned long *groups); }; struct sock *__netlink_kernel_create(struct net *net, int unit, diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index 6c0bcde620e8..96c605e45235 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -677,6 +677,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol, struct netlink_sock *nlk; int (*bind)(struct net *net, int group); void (*unbind)(struct net *net, int group); + void (*release)(struct sock *sock, unsigned long *groups); int err = 0; sock->state = SS_UNCONNECTED; @@ -704,6 +705,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol, cb_mutex = nl_table[protocol].cb_mutex; bind = nl_table[protocol].bind; unbind = nl_table[protocol].unbind; + release = nl_table[protocol].release; netlink_unlock_table(); if (err < 0) @@ -719,6 +721,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol, nlk->module = module; nlk->netlink_bind = bind; nlk->netlink_unbind = unbind; + nlk->netlink_release = release; out: return err; @@ -763,6 +766,8 @@ static int netlink_release(struct socket *sock) * OK. Socket is unlinked, any packets that arrive now * will be purged. */ + if (nlk->netlink_release) + nlk->netlink_release(sk, nlk->groups); /* must not acquire netlink_table_lock in any way again before unbind * and notifying genetlink is done as otherwise it might deadlock @@ -2089,6 +2094,7 @@ __netlink_kernel_create(struct net *net, int unit, struct module *module, if (cfg) { nl_table[unit].bind = cfg->bind; nl_table[unit].unbind = cfg->unbind; + nl_table[unit].release = cfg->release; nl_table[unit].flags = cfg->flags; } nl_table[unit].registered = 1; diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h index 90a3198a9b7f..fd424cd63f31 100644 --- a/net/netlink/af_netlink.h +++ b/net/netlink/af_netlink.h @@ -42,6 +42,8 @@ struct netlink_sock { void (*netlink_rcv)(struct sk_buff *skb); int (*netlink_bind)(struct net *net, int group); void (*netlink_unbind)(struct net *net, int group); + void (*netlink_release)(struct sock *sk, + unsigned long *groups); struct module *module; struct rhash_head node; @@ -64,6 +66,8 @@ struct netlink_table { struct module *module; int (*bind)(struct net *net, int group); void (*unbind)(struct net *net, int group); + void (*release)(struct sock *sk, + unsigned long *groups); int registered; }; From patchwork Wed Jul 19 20:18:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anjali Kulkarni X-Patchwork-Id: 13319456 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5210B22EE7 for ; Wed, 19 Jul 2023 20:19:17 +0000 (UTC) Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com [205.220.177.32]) 
From: Anjali Kulkarni
To: davem@davemloft.net
Cc: Liam.Howlett@Oracle.com, akpm@linux-foundation.org, david@fries.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net, ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org, socketcan@hartkopp.net, petrm@nvidia.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, anjali.k.kulkarni@oracle.com
Subject: [PATCH net-next v10 3/6] connector/cn_proc: Add filtering to fix some bugs
Date: Wed, 19 Jul 2023 13:18:18 -0700
Message-ID: <20230719201821.495037-4-anjali.k.kulkarni@oracle.com>
In-Reply-To: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
References: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
The current proc connector code has the following bugs: if there is more than one listener for proc connector messages and one of them deregisters with PROC_CN_MCAST_IGNORE, it will still receive all proc connector messages as long as there is another listener. Another issue is that if one client calls PROC_CN_MCAST_LISTEN and another calls PROC_CN_MCAST_IGNORE, then both end up not getting any messages.

This patch adds filtering and drops the packet if the client has sent PROC_CN_MCAST_IGNORE. This state is stored in the client socket's sk_user_data. In addition, proc_event_num_listeners is now incremented or decremented only once per client. This fixes the above issues.

cn_release is the release function added for NETLINK_CONNECTOR. It uses the newly added netlink_release function added to netlink_sock, and it frees sk_user_data.

Signed-off-by: Anjali Kulkarni
Reviewed-by: Liam R. Howlett
---
 drivers/connector/cn_proc.c   | 57 +++++++++++++++++++++++++++++------
 drivers/connector/connector.c | 21 ++++++++++---
 drivers/w1/w1_netlink.c       |  6 ++--
 include/linux/connector.h     |  8 ++++-
 include/uapi/linux/cn_proc.h  | 43 +++++++++++++++-----------
 5 files changed, 100 insertions(+), 35 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c index ccac1c453080..1ba288ed2bf7 100644 --- a/drivers/connector/cn_proc.c +++ b/drivers/connector/cn_proc.c @@ -48,6 +48,21 @@ static DEFINE_PER_CPU(struct local_event, local_event) = { .lock = INIT_LOCAL_LOCK(lock), }; +static int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data) +{ + enum proc_cn_mcast_op mc_op; + + if (!dsk) + return 0; + + mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op; + + if (mc_op == PROC_CN_MCAST_IGNORE) + return 1; + + return 0; +} + static inline void send_msg(struct cn_msg *msg) { local_lock(&local_event.lock); @@ -61,7 +76,8 @@ static inline void send_msg(struct cn_msg *msg) * * If cn_netlink_send() fails, the data is not sent.
*/ - cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT); + cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT, + cn_filter, NULL); local_unlock(&local_event.lock); } @@ -346,11 +362,9 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack) static void cn_proc_mcast_ctl(struct cn_msg *msg, struct netlink_skb_parms *nsp) { - enum proc_cn_mcast_op *mc_op = NULL; - int err = 0; - - if (msg->len != sizeof(*mc_op)) - return; + enum proc_cn_mcast_op mc_op = 0, prev_mc_op = 0; + int err = 0, initial = 0; + struct sock *sk = NULL; /* * Events are reported with respect to the initial pid @@ -367,13 +381,36 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, goto out; } - mc_op = (enum proc_cn_mcast_op *)msg->data; - switch (*mc_op) { + if (msg->len == sizeof(mc_op)) + mc_op = *((enum proc_cn_mcast_op *)msg->data); + else + return; + + if (nsp->sk) { + sk = nsp->sk; + if (sk->sk_user_data == NULL) { + sk->sk_user_data = kzalloc(sizeof(struct proc_input), + GFP_KERNEL); + if (sk->sk_user_data == NULL) { + err = ENOMEM; + goto out; + } + initial = 1; + } else { + prev_mc_op = + ((struct proc_input *)(sk->sk_user_data))->mcast_op; + } + ((struct proc_input *)(sk->sk_user_data))->mcast_op = mc_op; + } + + switch (mc_op) { case PROC_CN_MCAST_LISTEN: - atomic_inc(&proc_event_num_listeners); + if (initial || (prev_mc_op != PROC_CN_MCAST_LISTEN)) + atomic_inc(&proc_event_num_listeners); break; case PROC_CN_MCAST_IGNORE: - atomic_dec(&proc_event_num_listeners); + if (!initial && (prev_mc_op != PROC_CN_MCAST_IGNORE)) + atomic_dec(&proc_event_num_listeners); break; default: err = EINVAL; diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c index 48ec7ce6ecac..d1179df2b0ba 100644 --- a/drivers/connector/connector.c +++ b/drivers/connector/connector.c @@ -59,7 +59,9 @@ static int cn_already_initialized; * both, or if both are zero then the group is looked up and sent there. */ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group, - gfp_t gfp_mask) + gfp_t gfp_mask, + int (*filter)(struct sock *dsk, struct sk_buff *skb, void *data), + void *filter_data) { struct cn_callback_entry *__cbq; unsigned int size; @@ -110,8 +112,9 @@ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group, NETLINK_CB(skb).dst_group = group; if (group) - return netlink_broadcast(dev->nls, skb, portid, group, - gfp_mask); + return netlink_broadcast_filtered(dev->nls, skb, portid, group, + gfp_mask, filter, + (void *)filter_data); return netlink_unicast(dev->nls, skb, portid, !gfpflags_allow_blocking(gfp_mask)); } @@ -121,7 +124,8 @@ EXPORT_SYMBOL_GPL(cn_netlink_send_mult); int cn_netlink_send(struct cn_msg *msg, u32 portid, u32 __group, gfp_t gfp_mask) { - return cn_netlink_send_mult(msg, msg->len, portid, __group, gfp_mask); + return cn_netlink_send_mult(msg, msg->len, portid, __group, gfp_mask, + NULL, NULL); } EXPORT_SYMBOL_GPL(cn_netlink_send); @@ -162,6 +166,14 @@ static int cn_call_callback(struct sk_buff *skb) return err; } +static void cn_release(struct sock *sk, unsigned long *groups) +{ + if (groups && test_bit(CN_IDX_PROC - 1, groups)) { + kfree(sk->sk_user_data); + sk->sk_user_data = NULL; + } +} + /* * Main netlink receiving function. 
* @@ -249,6 +261,7 @@ static int cn_init(void) struct netlink_kernel_cfg cfg = { .groups = CN_NETLINK_USERS + 0xf, .input = cn_rx_skb, + .release = cn_release, }; dev->nls = netlink_kernel_create(&init_net, NETLINK_CONNECTOR, &cfg); diff --git a/drivers/w1/w1_netlink.c b/drivers/w1/w1_netlink.c index db110cc442b1..691978cddab7 100644 --- a/drivers/w1/w1_netlink.c +++ b/drivers/w1/w1_netlink.c @@ -65,7 +65,8 @@ static void w1_unref_block(struct w1_cb_block *block) u16 len = w1_reply_len(block); if (len) { cn_netlink_send_mult(block->first_cn, len, - block->portid, 0, GFP_KERNEL); + block->portid, 0, + GFP_KERNEL, NULL, NULL); } kfree(block); } @@ -83,7 +84,8 @@ static void w1_reply_make_space(struct w1_cb_block *block, u16 space) { u16 len = w1_reply_len(block); if (len + space >= block->maxlen) { - cn_netlink_send_mult(block->first_cn, len, block->portid, 0, GFP_KERNEL); + cn_netlink_send_mult(block->first_cn, len, block->portid, + 0, GFP_KERNEL, NULL, NULL); block->first_cn->len = 0; block->cn = NULL; block->msg = NULL; diff --git a/include/linux/connector.h b/include/linux/connector.h index 487350bb19c3..cec2d99ae902 100644 --- a/include/linux/connector.h +++ b/include/linux/connector.h @@ -90,13 +90,19 @@ void cn_del_callback(const struct cb_id *id); * If @group is not zero, then message will be delivered * to the specified group. * @gfp_mask: GFP mask. + * @filter: Filter function to be used at netlink layer. + * @filter_data:Filter data to be supplied to the filter function * * It can be safely called from softirq context, but may silently * fail under strong memory pressure. * * If there are no listeners for given group %-ESRCH can be returned. */ -int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 group, gfp_t gfp_mask); +int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, + u32 group, gfp_t gfp_mask, + int (*filter)(struct sock *dsk, struct sk_buff *skb, + void *data), + void *filter_data); /** * cn_netlink_send - Sends message to the specified groups. 
diff --git a/include/uapi/linux/cn_proc.h b/include/uapi/linux/cn_proc.h index db210625cee8..6a06fb424313 100644 --- a/include/uapi/linux/cn_proc.h +++ b/include/uapi/linux/cn_proc.h @@ -30,6 +30,30 @@ enum proc_cn_mcast_op { PROC_CN_MCAST_IGNORE = 2 }; +enum proc_cn_event { + /* Use successive bits so the enums can be used to record + * sets of events as well + */ + PROC_EVENT_NONE = 0x00000000, + PROC_EVENT_FORK = 0x00000001, + PROC_EVENT_EXEC = 0x00000002, + PROC_EVENT_UID = 0x00000004, + PROC_EVENT_GID = 0x00000040, + PROC_EVENT_SID = 0x00000080, + PROC_EVENT_PTRACE = 0x00000100, + PROC_EVENT_COMM = 0x00000200, + /* "next" should be 0x00000400 */ + /* "last" is the last process event: exit, + * while "next to last" is coredumping event + */ + PROC_EVENT_COREDUMP = 0x40000000, + PROC_EVENT_EXIT = 0x80000000 +}; + +struct proc_input { + enum proc_cn_mcast_op mcast_op; +}; + /* * From the user's point of view, the process * ID is the thread group ID and thread ID is the internal @@ -44,24 +68,7 @@ enum proc_cn_mcast_op { */ struct proc_event { - enum what { - /* Use successive bits so the enums can be used to record - * sets of events as well - */ - PROC_EVENT_NONE = 0x00000000, - PROC_EVENT_FORK = 0x00000001, - PROC_EVENT_EXEC = 0x00000002, - PROC_EVENT_UID = 0x00000004, - PROC_EVENT_GID = 0x00000040, - PROC_EVENT_SID = 0x00000080, - PROC_EVENT_PTRACE = 0x00000100, - PROC_EVENT_COMM = 0x00000200, - /* "next" should be 0x00000400 */ - /* "last" is the last process event: exit, - * while "next to last" is coredumping event */ - PROC_EVENT_COREDUMP = 0x40000000, - PROC_EVENT_EXIT = 0x80000000 - } what; + enum proc_cn_event what; __u32 cpu; __u64 __attribute__((aligned(8))) timestamp_ns; /* Number of nano seconds since system boot */ From patchwork Wed Jul 19 20:18:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anjali Kulkarni X-Patchwork-Id: 13319459 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8F20622F01 for ; Wed, 19 Jul 2023 20:19:25 +0000 (UTC) Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2ACF31FF1; Wed, 19 Jul 2023 13:19:22 -0700 (PDT) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 36JFOhPX025943; Wed, 19 Jul 2023 20:19:00 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2023-03-30; bh=/Fz4A0FJfYyPSacR9/FCi93FxvaOZZWteruwoflNKIQ=; b=rGgusMhyylFdiD9zIwTbKke67aRlQNZDI3q6tjuafpcNgI4VoswS/HqOqKynqQrHHUdH s+VipjEVRjLY+SXRMkDqIlWNxSsSankCsqzt7E6t+NS3flXo4+lZPpOKfM2mfa8c2l14 o+U9p3NF55JFr5X9dSUPOcjGff7+1GyNdsnGvYOVz2iwP0XrKFevIaad8sJC1D+8eGGq QkFkg7EEIQ8ZmAt+TX59Zq/iU1m6uyRYq/orcaxDQjoioD8ddtbtzHcZi2EP/VsIwEeE 4lwXAhTgjbePzdGF/gvI3pz6grPhtimVb3vr5tiyVQpemHoI6RxW7HxDf65fX2vMDJ0c DQ== Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.appoci.oracle.com [138.1.114.2]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3run770dmr-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 19 Jul 2023 20:18:59 +0000 
From: Anjali Kulkarni
To: davem@davemloft.net
Cc: Liam.Howlett@Oracle.com, akpm@linux-foundation.org, david@fries.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net, ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org, socketcan@hartkopp.net, petrm@nvidia.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, anjali.k.kulkarni@oracle.com
Subject: [PATCH net-next v10 4/6] connector/cn_proc: Performance improvements
Date: Wed, 19 Jul 2023 13:18:19 -0700
Message-ID: <20230719201821.495037-5-anjali.k.kulkarni@oracle.com>
In-Reply-To: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
References: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>

This patch adds the capability to filter messages sent by the proc connector based on the event type supplied in the message from the client to the connector. The client can register to listen for an event type given in struct proc_input. This event-based filtering will greatly enhance performance: handling 8K exits takes about 70ms, whereas 8K forks + 8K exits take about 150ms, and 8K forks + 8K exits + 8K execs take about 200ms. There are currently nine different types of events, and without filtering a client has to listen to all of them. Also, measuring the time using pidfds for monitoring 8K process exits took much longer, 200ms, as compared to 70ms using only exit notifications from the proc connector.
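(As an illustrative aside for readers of this series, not part of the patch: the sketch below shows how a client could request only exit events using the new struct proc_input payload, mirroring what the selftest in patch 6/6 does. Error handling and the receive loop are omitted, and it assumes the uapi headers as updated by this series.)

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>

/* Subscribe to proc connector events, filtered to exit events only. */
static int subscribe_exit_events(void)
{
	char buf[NLMSG_SPACE(sizeof(struct cn_msg) + sizeof(struct proc_input))];
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK, .nl_groups = CN_IDX_PROC };
	struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
	struct cn_msg *msg;
	struct proc_input *inp;
	int sk;

	sk = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
	if (sk < 0)
		return -1;
	sa.nl_pid = getpid();
	if (bind(sk, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return -1;

	memset(buf, 0, sizeof(buf));
	nlh->nlmsg_len = NLMSG_LENGTH(sizeof(*msg) + sizeof(*inp));
	nlh->nlmsg_type = NLMSG_DONE;
	nlh->nlmsg_pid = getpid();

	msg = (struct cn_msg *)NLMSG_DATA(nlh);
	msg->id.idx = CN_IDX_PROC;
	msg->id.val = CN_VAL_PROC;
	msg->len = sizeof(*inp);

	inp = (struct proc_input *)msg->data;
	inp->mcast_op = PROC_CN_MCAST_LISTEN;	/* start listening ... */
	inp->event_type = PROC_EVENT_EXIT;	/* ... but only for exit events */

	if (send(sk, nlh, nlh->nlmsg_len, 0) < 0)
		return -1;

	return sk;	/* caller then recv()s struct proc_event notifications on sk */
}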
We also add a new event type, PROC_EVENT_NONZERO_EXIT, which is only sent by the kernel to a listening application when a process exits with a non-zero exit status. This will help clients like Oracle DB, where a monitoring process wants notifications for non-zero process exits so it can clean up after them. This kind of event could also be useful to other applications, such as Google's lmkd daemon, which needs a killed process's exit notification.

The patch takes care that existing clients using the old mechanism of not sending an event type keep working without any changes. The cn_filter function checks whether the event type being notified via the proc connector matches the event type requested by the client, and sends the packet on a match or drops it otherwise.

Signed-off-by: Anjali Kulkarni
Reviewed-by: Liam R. Howlett
---
 drivers/connector/cn_proc.c  | 62 ++++++++++++++++++++++++++++++----
 include/uapi/linux/cn_proc.h | 19 +++++++++++
 2 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c index 1ba288ed2bf7..dfc84d44f804 100644 --- a/drivers/connector/cn_proc.c +++ b/drivers/connector/cn_proc.c @@ -50,21 +50,45 @@ static DEFINE_PER_CPU(struct local_event, local_event) = { static int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data) { + __u32 what, exit_code, *ptr; enum proc_cn_mcast_op mc_op; + uintptr_t val; - if (!dsk) + if (!dsk || !data) return 0; + ptr = (__u32 *)data; + what = *ptr++; + exit_code = *ptr; + val = ((struct proc_input *)(dsk->sk_user_data))->event_type; mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op; if (mc_op == PROC_CN_MCAST_IGNORE) return 1; - return 0; + if ((__u32)val == PROC_EVENT_ALL) + return 0; + + /* + * Drop packet if we have to report only non-zero exit status + * (PROC_EVENT_NONZERO_EXIT) and exit status is 0 + */ + if (((__u32)val & PROC_EVENT_NONZERO_EXIT) && + (what == PROC_EVENT_EXIT)) { + if (exit_code) + return 0; + } + + if ((__u32)val & what) + return 0; + + return 1; } static inline void send_msg(struct cn_msg *msg) { + __u32 filter_data[2]; + local_lock(&local_event.lock); msg->seq = __this_cpu_inc_return(local_event.count) - 1; @@ -76,8 +100,16 @@ static inline void send_msg(struct cn_msg *msg) * * If cn_netlink_send() fails, the data is not sent.
*/ + filter_data[0] = ((struct proc_event *)msg->data)->what; + if (filter_data[0] == PROC_EVENT_EXIT) { + filter_data[1] = + ((struct proc_event *)msg->data)->event_data.exit.exit_code; + } else { + filter_data[1] = 0; + } + cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT, - cn_filter, NULL); + cn_filter, (void *)filter_data); local_unlock(&local_event.lock); } @@ -357,12 +389,15 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack) /** * cn_proc_mcast_ctl - * @data: message sent from userspace via the connector + * @msg: message sent from userspace via the connector + * @nsp: NETLINK_CB of the client's socket buffer */ static void cn_proc_mcast_ctl(struct cn_msg *msg, struct netlink_skb_parms *nsp) { enum proc_cn_mcast_op mc_op = 0, prev_mc_op = 0; + struct proc_input *pinput = NULL; + enum proc_cn_event ev_type = 0; int err = 0, initial = 0; struct sock *sk = NULL; @@ -381,10 +416,21 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, goto out; } - if (msg->len == sizeof(mc_op)) + if (msg->len == sizeof(*pinput)) { + pinput = (struct proc_input *)msg->data; + mc_op = pinput->mcast_op; + ev_type = pinput->event_type; + } else if (msg->len == sizeof(mc_op)) { mc_op = *((enum proc_cn_mcast_op *)msg->data); - else + ev_type = PROC_EVENT_ALL; + } else { return; + } + + ev_type = valid_event((enum proc_cn_event)ev_type); + + if (ev_type == PROC_EVENT_NONE) + ev_type = PROC_EVENT_ALL; if (nsp->sk) { sk = nsp->sk; @@ -400,6 +446,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, prev_mc_op = ((struct proc_input *)(sk->sk_user_data))->mcast_op; } + ((struct proc_input *)(sk->sk_user_data))->event_type = + ev_type; ((struct proc_input *)(sk->sk_user_data))->mcast_op = mc_op; } @@ -411,6 +459,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, case PROC_CN_MCAST_IGNORE: if (!initial && (prev_mc_op != PROC_CN_MCAST_IGNORE)) atomic_dec(&proc_event_num_listeners); + ((struct proc_input *)(sk->sk_user_data))->event_type = + PROC_EVENT_NONE; break; default: err = EINVAL; diff --git a/include/uapi/linux/cn_proc.h b/include/uapi/linux/cn_proc.h index 6a06fb424313..f2afb7cc4926 100644 --- a/include/uapi/linux/cn_proc.h +++ b/include/uapi/linux/cn_proc.h @@ -30,6 +30,15 @@ enum proc_cn_mcast_op { PROC_CN_MCAST_IGNORE = 2 }; +#define PROC_EVENT_ALL (PROC_EVENT_FORK | PROC_EVENT_EXEC | PROC_EVENT_UID | \ + PROC_EVENT_GID | PROC_EVENT_SID | PROC_EVENT_PTRACE | \ + PROC_EVENT_COMM | PROC_EVENT_NONZERO_EXIT | \ + PROC_EVENT_COREDUMP | PROC_EVENT_EXIT) + +/* + * If you add an entry in proc_cn_event, make sure you add it in + * PROC_EVENT_ALL above as well. 
+ */ enum proc_cn_event { /* Use successive bits so the enums can be used to record * sets of events as well @@ -45,15 +54,25 @@ enum proc_cn_event { /* "next" should be 0x00000400 */ /* "last" is the last process event: exit, * while "next to last" is coredumping event + * before that is report only if process dies + * with non-zero exit status */ + PROC_EVENT_NONZERO_EXIT = 0x20000000, PROC_EVENT_COREDUMP = 0x40000000, PROC_EVENT_EXIT = 0x80000000 }; struct proc_input { enum proc_cn_mcast_op mcast_op; + enum proc_cn_event event_type; }; +static inline enum proc_cn_event valid_event(enum proc_cn_event ev_type) +{ + ev_type &= PROC_EVENT_ALL; + return ev_type; +} + /* * From the user's point of view, the process * ID is the thread group ID and thread ID is the internal From patchwork Wed Jul 19 20:18:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anjali Kulkarni X-Patchwork-Id: 13319458 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C4CA522F01 for ; Wed, 19 Jul 2023 20:19:24 +0000 (UTC) Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DEC6E1FD7; Wed, 19 Jul 2023 13:19:21 -0700 (PDT) Received: from pps.filterd (m0246629.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 36JFOp3w008975; Wed, 19 Jul 2023 20:19:00 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2023-03-30; bh=nAoxJO9lmR7b76HnZn9S9UKfqm80QNkhZFMfR42xIHI=; b=nSBhlDR3ciIRY6Yc75WlTKiXW344KlBYCLWr/3EJ9equ9DJuDC2jjK5jEbIO9LK/MM/z 0tcvRRTBx4oqCBkJWS23aQ4Zht9Fwrz/WmXV0orbVYXZcg78HgeIL0b7d5rYZl7mHTP/ hY70RF67A6pTzKB5AiBoSK2JBoVAXx/xkzeyuMimbZi1q8dGEBOzyP3qYpBqrsJHA2Cl FAZRhWSpDPY9e3x75F4cZKQyyODXfcWAOsBgMOJ0pDeo7McKJM6/Pxyls+NCqCVRM+aR MvFF+jnfsVh+09Nxm8ERGeu2sWhmO4nknFT+R1Ptjj/xZAPzYMRGiZ3QozPMq+a4jEIf tA== Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.appoci.oracle.com [138.1.114.2]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3run780bxd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 19 Jul 2023 20:19:00 +0000 Received: from pps.filterd (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19) with ESMTP id 36JKHCt2038195; Wed, 19 Jul 2023 20:19:00 GMT Received: from pps.reinject (localhost [127.0.0.1]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id 3ruhw7dwwc-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 19 Jul 2023 20:18:59 +0000 Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 36JKIrMk007349; Wed, 19 Jul 2023 20:18:59 GMT Received: from ca-dev112.us.oracle.com (ca-dev112.us.oracle.com [10.129.136.47]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTP id 3ruhw7dwct-6; Wed, 19 Jul 2023 20:18:59 +0000 From: Anjali Kulkarni To: davem@davemloft.net Cc: Liam.Howlett@Oracle.com, 
akpm@linux-foundation.org, david@fries.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net, ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org, socketcan@hartkopp.net, petrm@nvidia.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, anjali.k.kulkarni@oracle.com
Subject: [PATCH net-next v10 5/6] connector/cn_proc: Allow non-root users access
Date: Wed, 19 Jul 2023 13:18:20 -0700
Message-ID: <20230719201821.495037-6-anjali.k.kulkarni@oracle.com>
In-Reply-To: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
References: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>

There were a couple of reasons for not allowing non-root users access initially. One is that at some point there was no proper receive buffer management in place for netlink multicast, but that has long been fixed; see the link below for more context. The second is that some of the messages may contain data that is root-only, but this should be handled with finer granularity, which is being done at the protocol layer. The only problematic protocols are nf_queue and the firewall netlink. Hence, this restriction on non-root access was relaxed for NETLINK_ROUTE initially: https://lore.kernel.org/all/20020612013101.A22399@wotan.suse.de/ The restriction has also been removed for the following protocols: NETLINK_KOBJECT_UEVENT, NETLINK_AUDIT, NETLINK_SOCK_DIAG, NETLINK_GENERIC, NETLINK_SELINUX.

Since process connector messages are not sensitive (process fork, exit notifications, etc.), and anyone can read /proc data, we can allow non-root access here. However, since process event notification is not the only consumer of NETLINK_CONNECTOR, we can make this change even more fine-grained than the protocol level, by checking the multicast group within the protocol. Allow non-root access for NETLINK_CONNECTOR via NL_CFG_F_NONROOT_RECV, but add a new bind function, cn_bind(), which allows non-root access only for the CN_IDX_PROC multicast group.

Signed-off-by: Anjali Kulkarni Reviewed-by: Liam R.
Howlett --- drivers/connector/cn_proc.c | 6 ------ drivers/connector/connector.c | 19 +++++++++++++++++++ 2 files changed, 19 insertions(+), 6 deletions(-) diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c index dfc84d44f804..05d562e9c8b1 100644 --- a/drivers/connector/cn_proc.c +++ b/drivers/connector/cn_proc.c @@ -410,12 +410,6 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, !task_is_in_init_pid_ns(current)) return; - /* Can only change if privileged. */ - if (!__netlink_ns_capable(nsp, &init_user_ns, CAP_NET_ADMIN)) { - err = EPERM; - goto out; - } - if (msg->len == sizeof(*pinput)) { pinput = (struct proc_input *)msg->data; mc_op = pinput->mcast_op; diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c index d1179df2b0ba..7f7b94f616a6 100644 --- a/drivers/connector/connector.c +++ b/drivers/connector/connector.c @@ -166,6 +166,23 @@ static int cn_call_callback(struct sk_buff *skb) return err; } +/* + * Allow non-root access for NETLINK_CONNECTOR family having CN_IDX_PROC + * multicast group. + */ +static int cn_bind(struct net *net, int group) +{ + unsigned long groups = (unsigned long) group; + + if (ns_capable(net->user_ns, CAP_NET_ADMIN)) + return 0; + + if (test_bit(CN_IDX_PROC - 1, &groups)) + return 0; + + return -EPERM; +} + static void cn_release(struct sock *sk, unsigned long *groups) { if (groups && test_bit(CN_IDX_PROC - 1, groups)) { @@ -261,6 +278,8 @@ static int cn_init(void) struct netlink_kernel_cfg cfg = { .groups = CN_NETLINK_USERS + 0xf, .input = cn_rx_skb, + .flags = NL_CFG_F_NONROOT_RECV, + .bind = cn_bind, .release = cn_release, }; From patchwork Wed Jul 19 20:18:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anjali Kulkarni X-Patchwork-Id: 13319461 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DF72321D28 for ; Wed, 19 Jul 2023 20:19:45 +0000 (UTC) Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com [205.220.177.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4DFA1FFB; Wed, 19 Jul 2023 13:19:30 -0700 (PDT) Received: from pps.filterd (m0246630.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 36JFOYs8027141; Wed, 19 Jul 2023 20:19:14 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2023-03-30; bh=FIiLJ5lpayMhUkOh0BqdJyvlyvcwDkfk1KO9C7kI+lM=; b=hbLLMlc4f5UH7SHjRMKP7ziX3QvUWDK6T3nT2d0F3LAz9bDn01CM9kjHIZv8rbneRYQl rKit0QH6Jwqr5kj97LFU2m1QF5I4n2HEo9TIlJqmh1ph+s2wW0jmK9ycQuvCE9fMOxyL 2VEg/XbC3C1iC81xj1uGjsKGcnaN8NB/6UsVxJ6DMn4H4+SpYLRrxjpmRoWd6qN1uXPt tYHeTLMtZ5Vqz4zYaQaU31i+vCjc7xnh0TAKCFS7KuvKHIC0mv4yBOKmWa9NHH5B3HH6 wdgtFuulLD3Ard9WvC81Lre+R+XIqpCCXJbtTjCAnZ9TnNqtwFqHR0Djmy+0lccHzv5H Tw== Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.appoci.oracle.com [138.1.114.2]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3run78888d-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 19 Jul 2023 20:19:13 +0000 Received: from pps.filterd (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by 
From: Anjali Kulkarni
To: davem@davemloft.net
Cc: Liam.Howlett@Oracle.com, akpm@linux-foundation.org, david@fries.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net, ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org, socketcan@hartkopp.net, petrm@nvidia.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, anjali.k.kulkarni@oracle.com
Subject: [PATCH net-next v10 6/6] connector/cn_proc: Selftest for proc connector
Date: Wed, 19 Jul 2023 13:18:21 -0700
Message-ID: <20230719201821.495037-7-anjali.k.kulkarni@oracle.com>
In-Reply-To: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>
References: <20230719201821.495037-1-anjali.k.kulkarni@oracle.com>

Run as ./proc_filter -f to run the new filter code. Run without "-f" to run the usual proc connector code, without the new filtering code.

Signed-off-by: Anjali Kulkarni Reviewed-by: Liam R.
Howlett --- tools/testing/selftests/Makefile | 1 + tools/testing/selftests/connector/Makefile | 6 + .../testing/selftests/connector/proc_filter.c | 310 ++++++++++++++++++ 3 files changed, 317 insertions(+) create mode 100644 tools/testing/selftests/connector/Makefile create mode 100644 tools/testing/selftests/connector/proc_filter.c diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile index 666b56f22a41..5c60a7cea732 100644 --- a/tools/testing/selftests/Makefile +++ b/tools/testing/selftests/Makefile @@ -8,6 +8,7 @@ TARGETS += cachestat TARGETS += capabilities TARGETS += cgroup TARGETS += clone3 +TARGETS += connector TARGETS += core TARGETS += cpufreq TARGETS += cpu-hotplug diff --git a/tools/testing/selftests/connector/Makefile b/tools/testing/selftests/connector/Makefile new file mode 100644 index 000000000000..21c9f3a973a0 --- /dev/null +++ b/tools/testing/selftests/connector/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0 +CFLAGS += -Wall + +TEST_GEN_PROGS = proc_filter + +include ../lib.mk diff --git a/tools/testing/selftests/connector/proc_filter.c b/tools/testing/selftests/connector/proc_filter.c new file mode 100644 index 000000000000..4fe8c6763fd8 --- /dev/null +++ b/tools/testing/selftests/connector/proc_filter.c @@ -0,0 +1,310 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../kselftest.h" + +#define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \ + sizeof(struct proc_input)) +#define NL_MESSAGE_SIZE_NF (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \ + sizeof(int)) + +#define MAX_EVENTS 1 + +volatile static int interrupted; +static int nl_sock, ret_errno, tcount; +static struct epoll_event evn; + +static int filter; + +#ifdef ENABLE_PRINTS +#define Printf printf +#else +#define Printf ksft_print_msg +#endif + +int send_message(void *pinp) +{ + char buff[NL_MESSAGE_SIZE]; + struct nlmsghdr *hdr; + struct cn_msg *msg; + + hdr = (struct nlmsghdr *)buff; + if (filter) + hdr->nlmsg_len = NL_MESSAGE_SIZE; + else + hdr->nlmsg_len = NL_MESSAGE_SIZE_NF; + hdr->nlmsg_type = NLMSG_DONE; + hdr->nlmsg_flags = 0; + hdr->nlmsg_seq = 0; + hdr->nlmsg_pid = getpid(); + + msg = (struct cn_msg *)NLMSG_DATA(hdr); + msg->id.idx = CN_IDX_PROC; + msg->id.val = CN_VAL_PROC; + msg->seq = 0; + msg->ack = 0; + msg->flags = 0; + + if (filter) { + msg->len = sizeof(struct proc_input); + ((struct proc_input *)msg->data)->mcast_op = + ((struct proc_input *)pinp)->mcast_op; + ((struct proc_input *)msg->data)->event_type = + ((struct proc_input *)pinp)->event_type; + } else { + msg->len = sizeof(int); + *(int *)msg->data = *(enum proc_cn_mcast_op *)pinp; + } + + if (send(nl_sock, hdr, hdr->nlmsg_len, 0) == -1) { + ret_errno = errno; + perror("send failed"); + return -3; + } + return 0; +} + +int register_proc_netlink(int *efd, void *input) +{ + struct sockaddr_nl sa_nl; + int err = 0, epoll_fd; + + nl_sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR); + + if (nl_sock == -1) { + ret_errno = errno; + perror("socket failed"); + return -1; + } + + bzero(&sa_nl, sizeof(sa_nl)); + sa_nl.nl_family = AF_NETLINK; + sa_nl.nl_groups = CN_IDX_PROC; + sa_nl.nl_pid = getpid(); + + if (bind(nl_sock, (struct sockaddr *)&sa_nl, sizeof(sa_nl)) == -1) { + ret_errno = errno; + perror("bind failed"); + return -2; + } + + epoll_fd = epoll_create1(EPOLL_CLOEXEC); + if (epoll_fd < 0) { + ret_errno = errno; + 
perror("epoll_create1 failed"); + return -2; + } + + err = send_message(input); + + if (err < 0) + return err; + + evn.events = EPOLLIN; + evn.data.fd = nl_sock; + if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, nl_sock, &evn) < 0) { + ret_errno = errno; + perror("epoll_ctl failed"); + return -3; + } + *efd = epoll_fd; + return 0; +} + +static void sigint(int sig) +{ + interrupted = 1; +} + +int handle_packet(char *buff, int fd, struct proc_event *event) +{ + struct nlmsghdr *hdr; + + hdr = (struct nlmsghdr *)buff; + + if (hdr->nlmsg_type == NLMSG_ERROR) { + perror("NLMSG_ERROR error\n"); + return -3; + } else if (hdr->nlmsg_type == NLMSG_DONE) { + event = (struct proc_event *) + ((struct cn_msg *)NLMSG_DATA(hdr))->data; + tcount++; + switch (event->what) { + case PROC_EVENT_EXIT: + Printf("Exit process %d (tgid %d) with code %d, signal %d\n", + event->event_data.exit.process_pid, + event->event_data.exit.process_tgid, + event->event_data.exit.exit_code, + event->event_data.exit.exit_signal); + break; + case PROC_EVENT_FORK: + Printf("Fork process %d (tgid %d), parent %d (tgid %d)\n", + event->event_data.fork.child_pid, + event->event_data.fork.child_tgid, + event->event_data.fork.parent_pid, + event->event_data.fork.parent_tgid); + break; + case PROC_EVENT_EXEC: + Printf("Exec process %d (tgid %d)\n", + event->event_data.exec.process_pid, + event->event_data.exec.process_tgid); + break; + case PROC_EVENT_UID: + Printf("UID process %d (tgid %d) uid %d euid %d\n", + event->event_data.id.process_pid, + event->event_data.id.process_tgid, + event->event_data.id.r.ruid, + event->event_data.id.e.euid); + break; + case PROC_EVENT_GID: + Printf("GID process %d (tgid %d) gid %d egid %d\n", + event->event_data.id.process_pid, + event->event_data.id.process_tgid, + event->event_data.id.r.rgid, + event->event_data.id.e.egid); + break; + case PROC_EVENT_SID: + Printf("SID process %d (tgid %d)\n", + event->event_data.sid.process_pid, + event->event_data.sid.process_tgid); + break; + case PROC_EVENT_PTRACE: + Printf("Ptrace process %d (tgid %d), Tracer %d (tgid %d)\n", + event->event_data.ptrace.process_pid, + event->event_data.ptrace.process_tgid, + event->event_data.ptrace.tracer_pid, + event->event_data.ptrace.tracer_tgid); + break; + case PROC_EVENT_COMM: + Printf("Comm process %d (tgid %d) comm %s\n", + event->event_data.comm.process_pid, + event->event_data.comm.process_tgid, + event->event_data.comm.comm); + break; + case PROC_EVENT_COREDUMP: + Printf("Coredump process %d (tgid %d) parent %d, (tgid %d)\n", + event->event_data.coredump.process_pid, + event->event_data.coredump.process_tgid, + event->event_data.coredump.parent_pid, + event->event_data.coredump.parent_tgid); + break; + default: + break; + } + } + return 0; +} + +int handle_events(int epoll_fd, struct proc_event *pev) +{ + char buff[CONNECTOR_MAX_MSG_SIZE]; + struct epoll_event ev[MAX_EVENTS]; + int i, event_count = 0, err = 0; + + event_count = epoll_wait(epoll_fd, ev, MAX_EVENTS, -1); + if (event_count < 0) { + ret_errno = errno; + if (ret_errno != EINTR) + perror("epoll_wait failed"); + return -3; + } + for (i = 0; i < event_count; i++) { + if (!(ev[i].events & EPOLLIN)) + continue; + if (recv(ev[i].data.fd, buff, sizeof(buff), 0) == -1) { + ret_errno = errno; + perror("recv failed"); + return -3; + } + err = handle_packet(buff, ev[i].data.fd, pev); + if (err < 0) + return err; + } + return 0; +} + +int main(int argc, char *argv[]) +{ + int epoll_fd, err; + struct proc_event proc_ev; + struct proc_input input; + + signal(SIGINT, sigint); + 
+ if (argc > 2) { + printf("Expected 0(assume no-filter) or 1 argument(-f)\n"); + exit(1); + } + + if (argc == 2) { + if (strcmp(argv[1], "-f") == 0) { + filter = 1; + } else { + printf("Valid option : -f (for filter feature)\n"); + exit(1); + } + } + + if (filter) { + input.event_type = PROC_EVENT_NONZERO_EXIT; + input.mcast_op = PROC_CN_MCAST_LISTEN; + err = register_proc_netlink(&epoll_fd, (void*)&input); + } else { + enum proc_cn_mcast_op op = PROC_CN_MCAST_LISTEN; + err = register_proc_netlink(&epoll_fd, (void*)&op); + } + + if (err < 0) { + if (err == -2) + close(nl_sock); + if (err == -3) { + close(nl_sock); + close(epoll_fd); + } + exit(1); + } + + while (!interrupted) { + err = handle_events(epoll_fd, &proc_ev); + if (err < 0) { + if (ret_errno == EINTR) + continue; + if (err == -2) + close(nl_sock); + if (err == -3) { + close(nl_sock); + close(epoll_fd); + } + exit(1); + } + } + + if (filter) { + input.mcast_op = PROC_CN_MCAST_IGNORE; + send_message((void*)&input); + } else { + enum proc_cn_mcast_op op = PROC_CN_MCAST_IGNORE; + send_message((void*)&op); + } + + close(epoll_fd); + close(nl_sock); + + printf("Done total count: %d\n", tcount); + exit(0); +}
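(Usage note, not part of the patch: once the series is applied, the selftest can be run directly as described in the commit message above, ./proc_filter or ./proc_filter -f, or through the standard kselftest workflow, e.g. make -C tools/testing/selftests TARGETS=connector run_tests; the exact invocation may vary by tree.)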