Message ID | 20201104015834.mcn2eoibxf6j3ksw@skbuf (mailing list archive)
---|---
State | Not Applicable
Series | DSA and ptp_classify_raw: saving some CPU cycles causes worse throughput?
> My untrained eye tells me that in the 'after patch' case (the worse
> one), there are less branch misses, and less cache misses. So by all
> perf metrics, the throughput should be better, but it isn't. What gives?

Maybe the frame has been pushed out of the L1 cache. The classify code is pulling it back in. It suffers some cache misses to get what it needs, but in the background some speculative cache loads also happen, which are 'free'. By the time the DSA tagger is called, which also needs the header in the frame, it is all in L1 and the tagger's work is fast.

Without the classify, the tagger is getting a cold cache. And it ends up waiting around longer since it cannot benefit from the speculative 'free' loads?

In your little patch, rather than a plain return, try calling prefetch() on the skb data so it might be warm by the time the tagger needs to manipulate it.

    Andrew
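[A minimal sketch of what Andrew suggests, applied on top of the patch quoted at the bottom of this thread. It is untested and not part of the thread: the early return still skips ptp_classify_raw(), but first issues prefetch() on the frame data so the headers may already be in L1 when the tagger parses them. prefetch() comes from <linux/prefetch.h>; the rest is the dsa_skb_tx_timestamp() shape from the patch.]

#include <linux/prefetch.h>

static void dsa_skb_tx_timestamp(struct dsa_slave_priv *p,
				 struct sk_buff *skb)
{
	struct sk_buff *clone;
	unsigned int type;

	if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) {
		/* No timestamping work to do, but ask the CPU to start
		 * pulling the frame headers into cache so the DSA tagger
		 * finds them warm (Andrew's suggestion; untested).
		 */
		prefetch(skb->data);
		return;
	}

	type = ptp_classify_raw(skb);
	if (type == PTP_CLASS_NONE)
		return;

	/* ... rest of the function unchanged ... */
}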
On Wed, 4 Nov 2020 03:58:34 +0200 Vladimir Oltean wrote:
> The only problem?
> Throughput is actually a few Mbps worse, and this is 100% reproducible,
> doesn't appear to be measurement error.

Is there any performance scaling enabled? IOW CPU freq can vary?
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index c6806eef906f..e0cda3a65f28 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -511,6 +511,9 @@ static void dsa_skb_tx_timestamp(struct dsa_slave_priv *p,
 	struct sk_buff *clone;
 	unsigned int type;
 
+	if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
+		return;
+
 	type = ptp_classify_raw(skb);
 	if (type == PTP_CLASS_NONE)
 		return;