| Field | Value |
|---|---|
| Message ID | 20240812145633.52911-4-jdamato@fastly.com |
| State | RFC |
| Delegated to | Netdev Maintainers |
| Series | Cleanup IRQ affinity checks in several drivers |
On Mon, Aug 12, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use napi_affinity_no_change instead of gve's internal implementation,
> simplifying and centralizing the logic.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
>  drivers/net/ethernet/google/gve/gve_main.c | 14 +-------------
>  1 file changed, 1 insertion(+), 13 deletions(-)
>
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> index 661566db68c8..ad5e85b8c6a5 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -298,18 +298,6 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
>  	return IRQ_HANDLED;
>  }
>
> -static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
> -{
> -	int cpu_curr = smp_processor_id();
> -	const struct cpumask *aff_mask;
> -
> -	aff_mask = irq_get_effective_affinity_mask(irq);
> -	if (unlikely(!aff_mask))
> -		return 1;
> -
> -	return cpumask_test_cpu(cpu_curr, aff_mask);
> -}
> -
>  int gve_napi_poll(struct napi_struct *napi, int budget)
>  {
>  	struct gve_notify_block *block;
> @@ -383,7 +371,7 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
>  	/* Reschedule by returning budget only if already on the correct
>  	 * cpu.
>  	 */
> -	if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
> +	if (likely(napi_affinity_no_change(block->irq)))

Nice to centralize this code! Evolving this to cache the affinity mask
like the other drivers would probably also require a means to update
the cache when the affinity changes.

>  		return budget;
>
>  	/* If not on the cpu with which this queue's irq has affinity
> --
> 2.25.1
>
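For readers unfamiliar with the caching approach Shailend alludes to, below is a hypothetical sketch of one way a driver could cache the effective affinity mask and keep it up to date via the kernel's IRQ affinity notifier. The struct and function names here are illustrative only; they are not part of the gve driver or of this series.

```c
#include <linux/interrupt.h>
#include <linux/cpumask.h>
#include <linux/kref.h>

/* Hypothetical per-queue state; not a structure from the gve driver. */
struct queue_irq_cache {
	struct irq_affinity_notify affinity_notify;
	cpumask_var_t affinity_mask;	/* cached copy read by the napi poll path */
};

/* Invoked by the IRQ core (from a workqueue) whenever the affinity changes. */
static void queue_irq_affinity_notify(struct irq_affinity_notify *notify,
				      const cpumask_t *mask)
{
	struct queue_irq_cache *cache =
		container_of(notify, struct queue_irq_cache, affinity_notify);

	cpumask_copy(cache->affinity_mask, mask);
}

/* Called when the notifier reference is dropped; nothing extra to free here. */
static void queue_irq_affinity_release(struct kref *ref)
{
}

static int queue_irq_cache_init(struct queue_irq_cache *cache, unsigned int irq)
{
	const struct cpumask *mask;

	if (!zalloc_cpumask_var(&cache->affinity_mask, GFP_KERNEL))
		return -ENOMEM;

	/* Seed the cache with the current effective affinity, if available. */
	mask = irq_get_effective_affinity_mask(irq);
	if (mask)
		cpumask_copy(cache->affinity_mask, mask);

	cache->affinity_notify.notify = queue_irq_affinity_notify;
	cache->affinity_notify.release = queue_irq_affinity_release;
	return irq_set_affinity_notifier(irq, &cache->affinity_notify);
}
```

With something like this in place, the napi poll path could test the cached mask instead of calling irq_get_effective_affinity_mask() on every poll.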
On Tue, Aug 13, 2024 at 11:55:31AM -0700, Shailend Chand wrote:
> On Mon, Aug 12, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > Use napi_affinity_no_change instead of gve's internal implementation,
> > simplifying and centralizing the logic.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> >  drivers/net/ethernet/google/gve/gve_main.c | 14 +-------------
> >  1 file changed, 1 insertion(+), 13 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> > index 661566db68c8..ad5e85b8c6a5 100644
> > --- a/drivers/net/ethernet/google/gve/gve_main.c
> > +++ b/drivers/net/ethernet/google/gve/gve_main.c
> > @@ -298,18 +298,6 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
> >  	return IRQ_HANDLED;
> >  }
> >
> > -static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
> > -{
> > -	int cpu_curr = smp_processor_id();
> > -	const struct cpumask *aff_mask;
> > -
> > -	aff_mask = irq_get_effective_affinity_mask(irq);
> > -	if (unlikely(!aff_mask))
> > -		return 1;
> > -
> > -	return cpumask_test_cpu(cpu_curr, aff_mask);
> > -}
> > -
> >  int gve_napi_poll(struct napi_struct *napi, int budget)
> >  {
> >  	struct gve_notify_block *block;
> > @@ -383,7 +371,7 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
> >  	/* Reschedule by returning budget only if already on the correct
> >  	 * cpu.
> >  	 */
> > -	if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
> > +	if (likely(napi_affinity_no_change(block->irq)))
>
> Nice to centralize this code! Evolving this to cache the affinity mask
> like the other drivers would probably also require a means to update
> the cache when the affinity changes.

Thanks for taking a look.

The gve driver already calls irq_get_effective_affinity_mask in the
hot path, so I'm planning on submitting a rfcv2 which will do this:

-	if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
+	const struct cpumask *aff_mask =
+		irq_get_effective_affinity_mask(block->irq);
+
+	if (likely(napi_affinity_no_change(aff_mask)))
 		return budget;

with a change like that there'd be no behavioral change to gve since
it didn't cache before and still won't be caching after this change.

I think a change can be made to gve in a separate patch set to
support caching the affinity mask and does not need to be included
with this change.

What do you think?
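The rfcv2 shape Joe describes implies a core helper that takes a cpumask rather than an irq number. A minimal sketch of such a helper, assuming it mirrors the logic of the gve_is_napi_on_home_cpu() function being removed, might look like this; the exact upstream name and signature may differ.

```c
#include <linux/cpumask.h>
#include <linux/compiler.h>
#include <linux/smp.h>

/* Sketch only: name and signature follow the rfcv2 snippet above. */
static inline bool napi_affinity_no_change(const struct cpumask *aff_mask)
{
	/* No effective mask available: keep polling on the current CPU. */
	if (unlikely(!aff_mask))
		return true;

	/* "No change" means the poll is still running on a CPU in the mask. */
	return cpumask_test_cpu(smp_processor_id(), aff_mask);
}
```

With that signature, the gve hot path stays behaviorally identical: it still looks up the effective mask on each poll and only gives up the budget-based reschedule when the current CPU falls outside it.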
On Tue, Aug 13, 2024 at 2:44 PM Joe Damato <jdamato@fastly.com> wrote:
>
> On Tue, Aug 13, 2024 at 11:55:31AM -0700, Shailend Chand wrote:
> > On Mon, Aug 12, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > Use napi_affinity_no_change instead of gve's internal implementation,
> > > simplifying and centralizing the logic.
> > >
> > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > ---
> > >  drivers/net/ethernet/google/gve/gve_main.c | 14 +-------------
> > >  1 file changed, 1 insertion(+), 13 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> > > index 661566db68c8..ad5e85b8c6a5 100644
> > > --- a/drivers/net/ethernet/google/gve/gve_main.c
> > > +++ b/drivers/net/ethernet/google/gve/gve_main.c
> > > @@ -298,18 +298,6 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
> > >  	return IRQ_HANDLED;
> > >  }
> > >
> > > -static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
> > > -{
> > > -	int cpu_curr = smp_processor_id();
> > > -	const struct cpumask *aff_mask;
> > > -
> > > -	aff_mask = irq_get_effective_affinity_mask(irq);
> > > -	if (unlikely(!aff_mask))
> > > -		return 1;
> > > -
> > > -	return cpumask_test_cpu(cpu_curr, aff_mask);
> > > -}
> > > -
> > >  int gve_napi_poll(struct napi_struct *napi, int budget)
> > >  {
> > >  	struct gve_notify_block *block;
> > > @@ -383,7 +371,7 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
> > >  	/* Reschedule by returning budget only if already on the correct
> > >  	 * cpu.
> > >  	 */
> > > -	if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
> > > +	if (likely(napi_affinity_no_change(block->irq)))
> >
> > Nice to centralize this code! Evolving this to cache the affinity mask
> > like the other drivers would probably also require a means to update
> > the cache when the affinity changes.
>
> Thanks for taking a look.
>
> The gve driver already calls irq_get_effective_affinity_mask in the
> hot path, so I'm planning on submitting a rfcv2 which will do this:
>
> -	if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
> +	const struct cpumask *aff_mask =
> +		irq_get_effective_affinity_mask(block->irq);
> +
> +	if (likely(napi_affinity_no_change(aff_mask)))
>  		return budget;
>
> with a change like that there'd be no behavioral change to gve since
> it didn't cache before and still won't be caching after this change.
>
> I think a change can be made to gve in a separate patch set to
> support caching the affinity mask and does not need to be included
> with this change.
>
> What do you think?

Sounds good!
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 661566db68c8..ad5e85b8c6a5 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -298,18 +298,6 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
 	return IRQ_HANDLED;
 }
 
-static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
-{
-	int cpu_curr = smp_processor_id();
-	const struct cpumask *aff_mask;
-
-	aff_mask = irq_get_effective_affinity_mask(irq);
-	if (unlikely(!aff_mask))
-		return 1;
-
-	return cpumask_test_cpu(cpu_curr, aff_mask);
-}
-
 int gve_napi_poll(struct napi_struct *napi, int budget)
 {
 	struct gve_notify_block *block;
@@ -383,7 +371,7 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
 	/* Reschedule by returning budget only if already on the correct
 	 * cpu.
 	 */
-	if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
+	if (likely(napi_affinity_no_change(block->irq)))
 		return budget;
 
 	/* If not on the cpu with which this queue's irq has affinity
Use napi_affinity_no_change instead of gve's internal implementation,
simplifying and centralizing the logic.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/google/gve/gve_main.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)