
[net-next,12/12] ice: Ethtool fdb_cnt stats

Message ID 20230417093412.12161-13-wojciech.drewek@intel.com (mailing list archive)
State Awaiting Upstream
Delegated to: Netdev Maintainers
Series ice: switchdev bridge offload

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/apply fail Patch does not apply to net-next

Commit Message

Wojciech Drewek April 17, 2023, 9:34 a.m. UTC
Introduce a new ethtool statistic, 'fdb_cnt'. It reports how many
bridge FDB entries have been created on a given netdev.

Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h            | 2 ++
 drivers/net/ethernet/intel/ice/ice_eswitch_br.c | 6 ++++++
 drivers/net/ethernet/intel/ice/ice_ethtool.c    | 1 +
 3 files changed, 9 insertions(+)

Comments

Alexander Lobakin April 21, 2023, 4:32 p.m. UTC | #1
From: Wojciech Drewek <wojciech.drewek@intel.com>
Date: Mon, 17 Apr 2023 11:34:12 +0200

> Introduce a new ethtool statistic, 'fdb_cnt'. It reports how many
> bridge FDB entries have been created on a given netdev.

[...]

> @@ -339,6 +340,7 @@ ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge,
>  	ice_eswitch_br_flow_delete(pf, fdb_entry->flow);
>  
>  	kfree(fdb_entry);
> +	vsi->fdb_cnt--;

Are FDB operations always serialized within one netdev? Because if
they're not, this probably needs to be atomic_t.

>  }
>  
>  static void

[...]

> diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
> index 8407c7175cf6..d06b2a688323 100644
> --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
> +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
> @@ -64,6 +64,7 @@ static const struct ice_stats ice_gstrings_vsi_stats[] = {
>  	ICE_VSI_STAT("tx_linearize", tx_linearize),
>  	ICE_VSI_STAT("tx_busy", tx_busy),
>  	ICE_VSI_STAT("tx_restart", tx_restart),
> +	ICE_VSI_STAT("fdb_cnt", fdb_cnt),

It's confusing to me to see this in the ethtool stats. They're usually
counters, and ice is no exception. But this one is not a counter, so it
might give the wrong impression.
Have you considered alternatives? rtnl (iproute) or maybe even Devlink
(but I believe the former fits better)? This might be a good candidate
to become a generic stat, who knows.

>  };
>  
>  enum ice_ethtool_test_id {

Thanks,
Olek
Wojciech Drewek May 9, 2023, 12:52 p.m. UTC | #2
> -----Original Message-----
> From: Lobakin, Aleksander <aleksander.lobakin@intel.com>
> Sent: piątek, 21 kwietnia 2023 18:33
> To: Drewek, Wojciech <wojciech.drewek@intel.com>
> Cc: intel-wired-lan@lists.osuosl.org; netdev@vger.kernel.org; Ertman, David M <david.m.ertman@intel.com>;
> michal.swiatkowski@linux.intel.com; marcin.szycik@linux.intel.com; Chmielewski, Pawel <pawel.chmielewski@intel.com>;
> Samudrala, Sridhar <sridhar.samudrala@intel.com>
> Subject: Re: [PATCH net-next 12/12] ice: Ethtool fdb_cnt stats
> 
> From: Wojciech Drewek <wojciech.drewek@intel.com>
> Date: Mon, 17 Apr 2023 11:34:12 +0200
> 
> > Introduce a new ethtool statistic, 'fdb_cnt'. It reports how many
> > bridge FDB entries have been created on a given netdev.
> 
> [...]
> 
> > @@ -339,6 +340,7 @@ ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge,
> >  	ice_eswitch_br_flow_delete(pf, fdb_entry->flow);
> >
> >  	kfree(fdb_entry);
> > +	vsi->fdb_cnt--;
> 
> Are FDB operations always serialized within one netdev? Because if
> they're not, this probably needs to be atomic_t.

All the FDB operations are either done from notification context, where they are
protected by rtnl_lock, or explicitly protected by us (see
ice_eswitch_br_fdb_event_work, where we take rtnl_lock).

> 
> >  }
> >
> >  static void
> 
> [...]
> 
> > diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
> > index 8407c7175cf6..d06b2a688323 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
> > @@ -64,6 +64,7 @@ static const struct ice_stats ice_gstrings_vsi_stats[] = {
> >  	ICE_VSI_STAT("tx_linearize", tx_linearize),
> >  	ICE_VSI_STAT("tx_busy", tx_busy),
> >  	ICE_VSI_STAT("tx_restart", tx_restart),
> > +	ICE_VSI_STAT("fdb_cnt", fdb_cnt),
> 
> It's confusing to me to see this in the ethtool stats. They're usually
> counters, and ice is no exception. But this one is not a counter, so it
> might give the wrong impression.
> Have you considered alternatives? rtnl (iproute) or maybe even Devlink
> (but I believe the former fits better)? This might be a good candidate
> to become a generic stat, who knows.

I'll do some research on the alternatives.

> 
> >  };
> >
> >  enum ice_ethtool_test_id {
> 
> Thanks,
> Olek
Alexander Lobakin May 9, 2023, 3:14 p.m. UTC | #3
From: Wojciech Drewek <wojciech.drewek@intel.com>
Date: Tue, 9 May 2023 14:52:26 +0200

> 
> 
>> -----Original Message-----
>> From: Lobakin, Aleksander <aleksander.lobakin@intel.com>
>> Sent: piątek, 21 kwietnia 2023 18:33
>> To: Drewek, Wojciech <wojciech.drewek@intel.com>
>> Cc: intel-wired-lan@lists.osuosl.org; netdev@vger.kernel.org; Ertman, David M <david.m.ertman@intel.com>;
>> michal.swiatkowski@linux.intel.com; marcin.szycik@linux.intel.com; Chmielewski, Pawel <pawel.chmielewski@intel.com>;
>> Samudrala, Sridhar <sridhar.samudrala@intel.com>
>> Subject: Re: [PATCH net-next 12/12] ice: Ethtool fdb_cnt stats
>>
>> From: Wojciech Drewek <wojciech.drewek@intel.com>
>> Date: Mon, 17 Apr 2023 11:34:12 +0200
>>
>>> Introduce a new ethtool statistic, 'fdb_cnt'. It reports how many
>>> bridge FDB entries have been created on a given netdev.
>>
>> [...]
>>
>>> @@ -339,6 +340,7 @@ ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge,
>>>  	ice_eswitch_br_flow_delete(pf, fdb_entry->flow);
>>>
>>>  	kfree(fdb_entry);
>>> +	vsi->fdb_cnt--;
>>
>> Are FDB operations always serialized within one netdev? Because if
>> they're not, this probably needs to be atomic_t.
> 
> All the FDB operations are either done from notification context, where they are
> protected by rtnl_lock, or explicitly protected by us (see
> ice_eswitch_br_fdb_event_work, where we take rtnl_lock).

BTW, I would replace relying on the RTNL lock with our own locks bit by bit.
I would say it was designed more for kernel-core internal usage, but then
got abused by tons of drivers.
Sure, it's outside of this series' scope, just FYI. This one is fine with
me as long as concurrent accesses from different SMP CPUs can't happen here.

[...]

Thanks,
Olek

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 489934ddfbb8..90e007942af6 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -350,6 +350,8 @@  struct ice_vsi {
 	u16 num_gfltr;
 	u16 num_bfltr;
 
+	u32 fdb_cnt;
+
 	/* RSS config */
 	u16 rss_table_size;	/* HW RSS table size */
 	u16 rss_size;		/* Allocated RSS queues */
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 4a69b3a67914..cfa4324bf1a2 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -330,6 +330,7 @@  static void
 ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge,
 				struct ice_esw_br_fdb_entry *fdb_entry)
 {
+	struct ice_vsi *vsi = fdb_entry->br_port->vsi;
 	struct ice_pf *pf = bridge->br_offloads->pf;
 
 	rhashtable_remove_fast(&bridge->fdb_ht, &fdb_entry->ht_node,
@@ -339,6 +340,7 @@  ice_eswitch_br_fdb_entry_delete(struct ice_esw_br *bridge,
 	ice_eswitch_br_flow_delete(pf, fdb_entry->flow);
 
 	kfree(fdb_entry);
+	vsi->fdb_cnt--;
 }
 
 static void
@@ -462,6 +464,8 @@  ice_eswitch_br_fdb_entry_create(struct net_device *netdev,
 
 	ice_eswitch_br_fdb_offload_notify(netdev, mac, vid, event);
 
+	br_port->vsi->fdb_cnt++;
+
 	return;
 
 err_fdb_insert:
@@ -941,6 +945,7 @@  ice_eswitch_br_vf_repr_port_init(struct ice_esw_br *bridge,
 	br_port->vsi_idx = br_port->vsi->idx;
 	br_port->type = ICE_ESWITCH_BR_VF_REPR_PORT;
 	repr->br_port = br_port;
+	repr->src_vsi->fdb_cnt = 0;
 
 	err = xa_insert(&bridge->ports, br_port->vsi_idx, br_port, GFP_KERNEL);
 	if (err) {
@@ -966,6 +971,7 @@  ice_eswitch_br_uplink_port_init(struct ice_esw_br *bridge, struct ice_pf *pf)
 	br_port->vsi_idx = br_port->vsi->idx;
 	br_port->type = ICE_ESWITCH_BR_UPLINK_PORT;
 	pf->br_port = br_port;
+	vsi->fdb_cnt = 0;
 
 	err = xa_insert(&bridge->ports, br_port->vsi_idx, br_port, GFP_KERNEL);
 	if (err) {
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 8407c7175cf6..d06b2a688323 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -64,6 +64,7 @@  static const struct ice_stats ice_gstrings_vsi_stats[] = {
 	ICE_VSI_STAT("tx_linearize", tx_linearize),
 	ICE_VSI_STAT("tx_busy", tx_busy),
 	ICE_VSI_STAT("tx_restart", tx_restart),
+	ICE_VSI_STAT("fdb_cnt", fdb_cnt),
 };
 
 enum ice_ethtool_test_id {