Message ID | 20210312162127.239795-5-alobakin@pm.me (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | gro: micro-optimize dev_gro_receive()
Context | Check | Description
---|---|---
netdev/cover_letter | success |
netdev/fixes_present | success |
netdev/patch_count | success |
netdev/tree_selection | success | Clearly marked for net-next
netdev/subject_prefix | success |
netdev/cc_maintainers | success | CCed 11 of 11 maintainers
netdev/source_inline | success | Was 0 now: 0
netdev/verify_signedoff | success |
netdev/module_param | success | Was 0 now: 0
netdev/build_32bit | success | Errors and warnings before: 10 this patch: 10
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0
netdev/verify_fixes | success |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 8 lines checked
netdev/build_allmodconfig_warn | success | Errors and warnings before: 10 this patch: 10
netdev/header_inline | success |
On Fri, Mar 12, 2021 at 5:22 PM Alexander Lobakin <alobakin@pm.me> wrote:
>
> Most of the functions that "convert" a hash value into an index
> (when RPS is configured / XPS is not configured / etc.) apply
> reciprocal_scale() to it. Its logic is simple but fair enough, and
> it accounts for the entire input value.
> By contrast, the 'hash & (GRO_HASH_BUCKETS - 1)' expression uses
> only the 3 least significant bits of the value, which is far from
> optimal (especially for XOR RSS hashers, where the hashes of two
> different flows may differ only by 1 bit somewhere in the middle).
>
> Use reciprocal_scale() here too to take the entire hash value into
> account and improve flow dispersion between GRO hash buckets.
>
> Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> ---
>  net/core/dev.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 65d9e7d9d1e8..bd7c9ba54623 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -5952,7 +5952,7 @@ static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head)
>
>  static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
>  {
> -	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
> +	u32 bucket = reciprocal_scale(skb_get_hash_raw(skb), GRO_HASH_BUCKETS);

This is going to use the 3 high-order bits instead of the 3 low-order bits.

Now, had you used hash_32(skb_get_hash_raw(skb), 3), you could have
claimed to use "more bits".

Toeplitz already shuffles stuff. Adding a multiply here seems not needed.

Please provide experimental results, because this looks unnecessary to me.
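To make the disagreement concrete, here is a minimal user-space sketch
(not kernel code) of the three bucket-selection strategies under
discussion, mirroring simplified versions of reciprocal_scale() from
include/linux/kernel.h and hash_32() from include/linux/hash.h. For a
power-of-two bucket count such as GRO_HASH_BUCKETS (8),
reciprocal_scale() reduces to taking the top 3 bits, exactly as noted
above.

#include <stdint.h>
#include <stdio.h>

#define GRO_HASH_BUCKETS 8           /* power of two, log2 = 3 */
#define GOLDEN_RATIO_32  0x61C88647u /* constant from include/linux/hash.h */

/* What the patch proposes: for a power-of-two ep_ro this is just the
 * top bits -- reciprocal_scale(val, 8) == val >> 29.
 */
static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
{
	return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
}

/* Eric's suggestion: a multiplicative hash mixes every input bit into
 * the high bits before they are taken.
 */
static uint32_t hash_32(uint32_t val, unsigned int bits)
{
	return (val * GOLDEN_RATIO_32) >> (32 - bits);
}

int main(void)
{
	uint32_t h = 0x12345678; /* arbitrary example hash */

	printf("mask:             %u\n", h & (GRO_HASH_BUCKETS - 1));            /* low 3 bits */
	printf("reciprocal_scale: %u\n", reciprocal_scale(h, GRO_HASH_BUCKETS)); /* high 3 bits */
	printf("hash_32:          %u\n", hash_32(h, 3));                         /* all bits mixed */
	return 0;
}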
From: Eric Dumazet <edumazet@google.com>
Date: Fri, 12 Mar 2021 17:33:53 +0100

> On Fri, Mar 12, 2021 at 5:22 PM Alexander Lobakin <alobakin@pm.me> wrote:
> >
> > Most of the functions that "convert" a hash value into an index
> > (when RPS is configured / XPS is not configured / etc.) apply
> > reciprocal_scale() to it. Its logic is simple but fair enough, and
> > it accounts for the entire input value.
> > By contrast, the 'hash & (GRO_HASH_BUCKETS - 1)' expression uses
> > only the 3 least significant bits of the value, which is far from
> > optimal (especially for XOR RSS hashers, where the hashes of two
> > different flows may differ only by 1 bit somewhere in the middle).
> >
> > Use reciprocal_scale() here too to take the entire hash value into
> > account and improve flow dispersion between GRO hash buckets.
> >
> > Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> > ---
> >  net/core/dev.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 65d9e7d9d1e8..bd7c9ba54623 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -5952,7 +5952,7 @@ static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head)
> >
> >  static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
> >  {
> > -	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
> > +	u32 bucket = reciprocal_scale(skb_get_hash_raw(skb), GRO_HASH_BUCKETS);
>
> This is going to use the 3 high-order bits instead of the 3 low-order bits.

We-e-ell, seems like it.

> Now, had you used hash_32(skb_get_hash_raw(skb), 3), you could have
> claimed to use "more bits".

Nice suggestion, I'll try. If there aren't any visible improvements,
I'll just drop this one.

> Toeplitz already shuffles stuff.

As well as CRC and others, but I feel like we shouldn't rely only on
the hardware.

> Adding a multiply here seems not needed.
>
> Please provide experimental results, because this looks unnecessary to me.

Thanks,
Al
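The XOR-hasher scenario from the commit message can be checked
directly. Below is a self-contained sketch (same simplified helpers as
above, inlined so it compiles on its own) with two hashes that differ
only in one middle bit: both the existing mask and, for a power-of-two
bucket count, reciprocal_scale() put them in the same bucket, while the
multiplicative hash_32() tends to separate them.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel helpers, 8 buckets (3 bits). */
static uint32_t bucket_mask(uint32_t h) { return h & 7; }
static uint32_t bucket_rs(uint32_t h)   { return (uint32_t)(((uint64_t)h * 8) >> 32); } /* == h >> 29 */
static uint32_t bucket_h32(uint32_t h)  { return (h * 0x61C88647u) >> 29; }

int main(void)
{
	uint32_t a = 0x1234a678;     /* arbitrary example hash */
	uint32_t b = a ^ (1u << 15); /* "flow" differing in bit 15 only */

	printf("mask:             %u vs %u\n", bucket_mask(a), bucket_mask(b)); /* collide */
	printf("reciprocal_scale: %u vs %u\n", bucket_rs(a), bucket_rs(b));     /* also collide */
	printf("hash_32:          %u vs %u\n", bucket_h32(a), bucket_h32(b));   /* usually differ */
	return 0;
}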
diff --git a/net/core/dev.c b/net/core/dev.c
index 65d9e7d9d1e8..bd7c9ba54623 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5952,7 +5952,7 @@ static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head)
 
 static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 {
-	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
+	u32 bucket = reciprocal_scale(skb_get_hash_raw(skb), GRO_HASH_BUCKETS);
 	struct gro_list *gro_list = &napi->gro_hash[bucket];
 	struct list_head *gro_head = &gro_list->list;
 	struct list_head *head = &offload_base;
Most of the functions that "convert" a hash value into an index
(when RPS is configured / XPS is not configured / etc.) apply
reciprocal_scale() to it. Its logic is simple but fair enough, and
it accounts for the entire input value.
By contrast, the 'hash & (GRO_HASH_BUCKETS - 1)' expression uses
only the 3 least significant bits of the value, which is far from
optimal (especially for XOR RSS hashers, where the hashes of two
different flows may differ only by 1 bit somewhere in the middle).

Use reciprocal_scale() here too to take the entire hash value into
account and improve flow dispersion between GRO hash buckets.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.30.2
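For comparison, the hash_32() variant suggested in the review above
would presumably look like the following. This is a hypothetical
sketch of the alternative, not a patch posted in this series; 3 is
log2 of GRO_HASH_BUCKETS, which is 8.

-	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
+	u32 bucket = hash_32(skb_get_hash_raw(skb), 3);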