| Message ID | 20230208153905.109912-1-pawel.chmielewski@intel.com (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to | Netdev Maintainers |
| Series | [1/1] ice: Change assigning method of the CPU affinity masks |
On Wed, Feb 08, 2023 at 09:18:14PM +0530, Kalesh Anakkur Purayil wrote:
> On Wed, Feb 8, 2023 at 9:11 PM Pawel Chmielewski <
> pawel.chmielewski@intel.com> wrote:

...

> > +	u16 v_idx, cpu = 0;
>
> [Kalesh]: if you initialize v_idx to 0 here, you can avoid the assignment
> below

I would avoid doing this. The problem is that it will become harder to
maintain and more error prone during development. So, please leave as is.
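For context, a minimal sketch of the two styles being discussed (an illustration only, not part of the posted patch):

	/* Kalesh's suggestion: initialize v_idx at its declaration */
	u16 v_idx = 0, cpu;

	/* As posted: declare first, assign right before the loop that uses it */
	u16 v_idx, cpu = 0;
	...
	v_idx = 0;
	for_each_numa_hop_mask(aff_mask, numa_node) {
		...
	}

Keeping the assignment next to the loop makes the counter's starting value visible at the point of use, which is the maintainability argument made above.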
On Wed, Feb 08, 2023 at 04:39:05PM +0100, Pawel Chmielewski wrote:
> With the introduction of sched_numa_hop_mask() and
> for_each_numa_hop_mask(), the affinity masks for queue vectors can be
> conveniently set by preferring the CPUs that are closest to the NUMA node
> of the parent PCI device.
>
> Signed-off-by: Pawel Chmielewski <pawel.chmielewski@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_base.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> index e864634d66bc..fd3550d15c9e 100644
> --- a/drivers/net/ethernet/intel/ice/ice_base.c
> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> @@ -122,8 +122,6 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
>  	if (vsi->type == ICE_VSI_VF)
>  		goto out;
>  	/* only set affinity_mask if the CPU is online */
> -	if (cpu_online(v_idx))
> -		cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
>
>  	/* This will not be called in the driver load path because the netdev
>  	 * will not be created yet. All other cases with register the NAPI
> @@ -659,8 +657,10 @@ int ice_vsi_wait_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)
>   */
>  int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
>  {
> +	cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
>  	struct device *dev = ice_pf_to_dev(vsi->back);
> -	u16 v_idx;
> +	int numa_node = dev->numa_node;
> +	u16 v_idx, cpu = 0;
>  	int err;
>
>  	if (vsi->q_vectors[0]) {
> @@ -674,6 +674,17 @@ int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
>  		goto err_out;
>  	}
>
> +	v_idx = 0;
> +	for_each_numa_hop_mask(aff_mask, numa_node) {
> +		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask)
> +			if (v_idx < vsi->num_q_vectors) {
> +				if (cpu_online(cpu))
> +					cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
> +				v_idx++;
> +			}

	else
		goto out;

> +		last_aff_mask = aff_mask;
> +	}
> +

out:

> 	return 0;
>
> err_out:
> --
> 2.37.3

Would it make sense to increment v_idx only if matched CPU is online?
It will create a less sparse array of vectors...

Thanks,
Yury
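For illustration, a rough sketch of the variant being asked about here, i.e. only consuming a vector slot when the CPU is actually online. This is not part of the posted patch; it assumes the same for_each_numa_hop_mask()/for_each_cpu_andnot() helpers and adds an out: label for the early exit:

	v_idx = 0;
	for_each_numa_hop_mask(aff_mask, numa_node) {
		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask) {
			if (v_idx >= vsi->num_q_vectors)
				goto out;
			/* Skip offline CPUs instead of spending a vector slot on them */
			if (!cpu_online(cpu))
				continue;
			cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
			v_idx++;
		}
		last_aff_mask = aff_mask;
	}
out:
	return 0;

With this shape every q_vector index that is reached ends up with a non-empty affinity mask, which is what "a less sparse array of vectors" refers to.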
On Wed, Feb 08, 2023 at 08:39:20AM -0800, Yury Norov wrote:
> On Wed, Feb 08, 2023 at 04:39:05PM +0100, Pawel Chmielewski wrote:

...

> > +	v_idx = 0;
> > +	for_each_numa_hop_mask(aff_mask, numa_node) {
> > +		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask)
> > +			if (v_idx < vsi->num_q_vectors) {
> > +				if (cpu_online(cpu))
> > +					cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
> > +				v_idx++;
> > +			}
> 	else
> 		goto out;

In this case the inverted conditional makes more sense:

	if (v_idx >= vsi->num_q_vectors)
		goto out;

	if (cpu_online(cpu))
		cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
	v_idx++;

(indentation level will be decreased).

> > +		last_aff_mask = aff_mask;
> > +	}
> > +
> out:
>
> > 	return 0;
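Put back into the surrounding loop, that suggestion might look roughly like the sketch below. It is an illustration only: it keeps the posted behaviour of advancing v_idx for offline CPUs and assumes an out: label exists before the final return:

	for_each_numa_hop_mask(aff_mask, numa_node) {
		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask) {
			if (v_idx >= vsi->num_q_vectors)
				goto out;
			if (cpu_online(cpu))
				cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
			v_idx++;
		}
		last_aff_mask = aff_mask;
	}
out:

The early exit replaces the "if (v_idx < vsi->num_q_vectors) { ... } else goto out;" nesting, so the useful work sits one indentation level shallower.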
Hi Pawel,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tnguy-next-queue/dev-queue]
[also build test WARNING on linus/master v6.2-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue
patch link:    https://lore.kernel.org/r/20230208153905.109912-1-pawel.chmielewski%40intel.com
patch subject: [PATCH 1/1] ice: Change assigning method of the CPU affinity masks
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20230209/202302090307.GQOJ4jik-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
        git checkout 33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=x86_64 olddefconfig
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/net/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/20230208153905.109912-1-pawel.chmielewski@intel.com

All warnings (new ones prefixed by >>):

   drivers/net/ethernet/intel/ice/ice_base.c: In function 'ice_vsi_alloc_q_vectors':
   drivers/net/ethernet/intel/ice/ice_base.c:678:9: error: implicit declaration of function 'for_each_numa_hop_mask'; did you mean 'for_each_node_mask'? [-Werror=implicit-function-declaration]
     678 |         for_each_numa_hop_mask(aff_mask, numa_node) {
         |         ^~~~~~~~~~~~~~~~~~~~~~
         |         for_each_node_mask
   drivers/net/ethernet/intel/ice/ice_base.c:678:52: error: expected ';' before '{' token
     678 |         for_each_numa_hop_mask(aff_mask, numa_node) {
         |                                                     ^~
         |                                                     ;
>> drivers/net/ethernet/intel/ice/ice_base.c:663:20: warning: unused variable 'cpu' [-Wunused-variable]
     663 |         u16 v_idx, cpu = 0;
         |                    ^~~
>> drivers/net/ethernet/intel/ice/ice_base.c:660:31: warning: unused variable 'last_aff_mask' [-Wunused-variable]
     660 |         cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
         |                               ^~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +/cpu +663 drivers/net/ethernet/intel/ice/ice_base.c

   650
   651	/**
   652	 * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors
   653	 * @vsi: the VSI being configured
   654	 *
   655	 * We allocate one q_vector per queue interrupt. If allocation fails we
   656	 * return -ENOMEM.
   657	 */
   658	int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
   659	{
 > 660		cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
   661		struct device *dev = ice_pf_to_dev(vsi->back);
   662		int numa_node = dev->numa_node;
 > 663		u16 v_idx, cpu = 0;
   664		int err;
   665
   666		if (vsi->q_vectors[0]) {
   667			dev_dbg(dev, "VSI %d has existing q_vectors\n", vsi->vsi_num);
   668			return -EEXIST;
   669		}
   670
   671		for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++) {
   672			err = ice_vsi_alloc_q_vector(vsi, v_idx);
   673			if (err)
   674				goto err_out;
   675		}
   676
   677		v_idx = 0;
 > 678		for_each_numa_hop_mask(aff_mask, numa_node) {
   679			for_each_cpu_andnot(cpu, aff_mask, last_aff_mask)
   680				if (v_idx < vsi->num_q_vectors) {
   681					if (cpu_online(cpu))
   682						cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
   683					v_idx++;
   684				}
   685			last_aff_mask = aff_mask;
   686		}
   687
   688		return 0;
   689
   690	err_out:
   691		while (v_idx--)
   692			ice_free_q_vector(vsi, v_idx);
   693
   694		dev_err(dev, "Failed to allocate %d q_vector for VSI %d, ret=%d\n",
   695			vsi->num_q_vectors, vsi->vsi_num, err);
   696		vsi->num_q_vectors = 0;
   697		return err;
   698	}
   699
Hi Pawel,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tnguy-next-queue/dev-queue]
[also build test ERROR on linus/master v6.2-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue
patch link:    https://lore.kernel.org/r/20230208153905.109912-1-pawel.chmielewski%40intel.com
patch subject: [PATCH 1/1] ice: Change assigning method of the CPU affinity masks
config: arm-randconfig-r046-20230209 (https://download.01.org/0day-ci/archive/20230209/202302090331.7L2Y93PP-lkp@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
        git checkout 33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/net/ethernet/intel/ice/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/20230208153905.109912-1-pawel.chmielewski@intel.com

All errors (new ones prefixed by >>):

   drivers/net/ethernet/intel/ice/ice_base.c: In function 'ice_vsi_alloc_q_vectors':
>> drivers/net/ethernet/intel/ice/ice_base.c:662:28: error: 'struct device' has no member named 'numa_node'
     662 |         int numa_node = dev->numa_node;
         |                            ^~
>> drivers/net/ethernet/intel/ice/ice_base.c:678:9: error: implicit declaration of function 'for_each_numa_hop_mask'; did you mean 'for_each_node_mask'? [-Werror=implicit-function-declaration]
     678 |         for_each_numa_hop_mask(aff_mask, numa_node) {
         |         ^~~~~~~~~~~~~~~~~~~~~~
         |         for_each_node_mask
>> drivers/net/ethernet/intel/ice/ice_base.c:678:52: error: expected ';' before '{' token
     678 |         for_each_numa_hop_mask(aff_mask, numa_node) {
         |                                                     ^~
         |                                                     ;
   drivers/net/ethernet/intel/ice/ice_base.c:663:20: warning: unused variable 'cpu' [-Wunused-variable]
     663 |         u16 v_idx, cpu = 0;
         |                    ^~~
   drivers/net/ethernet/intel/ice/ice_base.c:660:31: warning: unused variable 'last_aff_mask' [-Wunused-variable]
     660 |         cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
         |                               ^~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +662 drivers/net/ethernet/intel/ice/ice_base.c

   650
   651	/**
   652	 * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors
   653	 * @vsi: the VSI being configured
   654	 *
   655	 * We allocate one q_vector per queue interrupt. If allocation fails we
   656	 * return -ENOMEM.
   657	 */
   658	int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
   659	{
   660		cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
   661		struct device *dev = ice_pf_to_dev(vsi->back);
 > 662		int numa_node = dev->numa_node;
   663		u16 v_idx, cpu = 0;
   664		int err;
   665
   666		if (vsi->q_vectors[0]) {
   667			dev_dbg(dev, "VSI %d has existing q_vectors\n", vsi->vsi_num);
   668			return -EEXIST;
   669		}
   670
   671		for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++) {
   672			err = ice_vsi_alloc_q_vector(vsi, v_idx);
   673			if (err)
   674				goto err_out;
   675		}
   676
   677		v_idx = 0;
 > 678		for_each_numa_hop_mask(aff_mask, numa_node) {
   679			for_each_cpu_andnot(cpu, aff_mask, last_aff_mask)
   680				if (v_idx < vsi->num_q_vectors) {
   681					if (cpu_online(cpu))
   682						cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
   683					v_idx++;
   684				}
   685			last_aff_mask = aff_mask;
   686		}
   687
   688		return 0;
   689
   690	err_out:
   691		while (v_idx--)
   692			ice_free_q_vector(vsi, v_idx);
   693
   694		dev_err(dev, "Failed to allocate %d q_vector for VSI %d, ret=%d\n",
   695			vsi->num_q_vectors, vsi->vsi_num, err);
   696		vsi->num_q_vectors = 0;
   697		return err;
   698	}
   699
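The "'struct device' has no member named 'numa_node'" error above appears because that member only exists when CONFIG_NUMA is enabled, which this randconfig apparently turns off. One possible way to make the lookup build on both configurations (a sketch, not something the posted patch does) is the dev_to_node() accessor:

	/*
	 * dev_to_node() (from <linux/device.h>) builds with and without
	 * CONFIG_NUMA and returns NUMA_NO_NODE when NUMA support is
	 * compiled out, so the driver does not need to dereference
	 * dev->numa_node directly.
	 */
	int numa_node = dev_to_node(dev);

The implicit declaration of for_each_numa_hop_mask() is a separate issue: that helper comes from the sched_numa_hop_mask() series the changelog refers to, so the patch only builds on a tree that already contains it.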
On Wed, 8 Feb 2023 16:39:05 +0100 Pawel Chmielewski wrote:
> With the introduction of sched_numa_hop_mask() and
> for_each_numa_hop_mask(), the affinity masks for queue vectors can be
> conveniently set by preferring the CPUs that are closest to the NUMA node
> of the parent PCI device.

Damn, you had this one locked and ready, didn't you.. :)
On Thu, Feb 09, 2023 at 03:11:55AM +0800, kernel test robot wrote:
> Hi Pawel,
>
> Thank you for the patch! Perhaps something to improve:
>
> [auto build test WARNING on tnguy-next-queue/dev-queue]
> [also build test WARNING on linus/master v6.2-rc7]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue
> patch link:    https://lore.kernel.org/r/20230208153905.109912-1-pawel.chmielewski%40intel.com
> patch subject: [PATCH 1/1] ice: Change assigning method of the CPU affinity masks
> config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20230209/202302090307.GQOJ4jik-lkp@intel.com/config)
> compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
> reproduce (this is a W=1 build):
>         # https://github.com/intel-lab-lkp/linux/commit/33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
>         git checkout 33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
>         # save the config file
>         mkdir build_dir && cp config build_dir/.config
>         make W=1 O=build_dir ARCH=x86_64 olddefconfig
>         make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/net/
>
> If you fix the issue, kindly add following tag where applicable
> | Reported-by: kernel test robot <lkp@intel.com>
> | Link: https://lore.kernel.org/oe-kbuild-all/20230208153905.109912-1-pawel.chmielewski@intel.com

Sorry that the link is wrong, which should be

Link: https://lore.kernel.org/oe-kbuild-all/202302090307.GQOJ4jik-lkp@intel.com/
Hi Pawel,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tnguy-next-queue/dev-queue]
[also build test ERROR on linus/master v6.2-rc7 next-20230208]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue
patch link:    https://lore.kernel.org/r/20230208153905.109912-1-pawel.chmielewski%40intel.com
patch subject: [PATCH 1/1] ice: Change assigning method of the CPU affinity masks
config: i386-randconfig-a013 (https://download.01.org/0day-ci/archive/20230209/202302091302.PaJoE0gU-lkp@intel.com/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Pawel-Chmielewski/ice-Change-assigning-method-of-the-CPU-affinity-masks/20230208-234144
        git checkout 33971c3245ae75900dbc4cc9aa2b76ff9cdb534c
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/net/ethernet/intel/ice/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302091302.PaJoE0gU-lkp@intel.com

All errors (new ones prefixed by >>):

>> drivers/net/ethernet/intel/ice/ice_base.c:662:23: error: no member named 'numa_node' in 'struct device'
           int numa_node = dev->numa_node;
                           ~~~  ^
>> drivers/net/ethernet/intel/ice/ice_base.c:678:2: error: implicit declaration of function 'for_each_numa_hop_mask' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
           for_each_numa_hop_mask(aff_mask, numa_node) {
           ^
>> drivers/net/ethernet/intel/ice/ice_base.c:678:45: error: expected ';' after expression
           for_each_numa_hop_mask(aff_mask, numa_node) {
                                                       ^
                                                       ;
   3 errors generated.


vim +662 drivers/net/ethernet/intel/ice/ice_base.c

   650
   651	/**
   652	 * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors
   653	 * @vsi: the VSI being configured
   654	 *
   655	 * We allocate one q_vector per queue interrupt. If allocation fails we
   656	 * return -ENOMEM.
   657	 */
   658	int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
   659	{
   660		cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
   661		struct device *dev = ice_pf_to_dev(vsi->back);
 > 662		int numa_node = dev->numa_node;
   663		u16 v_idx, cpu = 0;
   664		int err;
   665
   666		if (vsi->q_vectors[0]) {
   667			dev_dbg(dev, "VSI %d has existing q_vectors\n", vsi->vsi_num);
   668			return -EEXIST;
   669		}
   670
   671		for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++) {
   672			err = ice_vsi_alloc_q_vector(vsi, v_idx);
   673			if (err)
   674				goto err_out;
   675		}
   676
   677		v_idx = 0;
 > 678		for_each_numa_hop_mask(aff_mask, numa_node) {
   679			for_each_cpu_andnot(cpu, aff_mask, last_aff_mask)
   680				if (v_idx < vsi->num_q_vectors) {
   681					if (cpu_online(cpu))
   682						cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
   683					v_idx++;
   684				}
   685			last_aff_mask = aff_mask;
   686		}
   687
   688		return 0;
   689
   690	err_out:
   691		while (v_idx--)
   692			ice_free_q_vector(vsi, v_idx);
   693
   694		dev_err(dev, "Failed to allocate %d q_vector for VSI %d, ret=%d\n",
   695			vsi->num_q_vectors, vsi->vsi_num, err);
   696		vsi->num_q_vectors = 0;
   697		return err;
   698	}
   699
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index e864634d66bc..fd3550d15c9e 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -122,8 +122,6 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
 	if (vsi->type == ICE_VSI_VF)
 		goto out;
 	/* only set affinity_mask if the CPU is online */
-	if (cpu_online(v_idx))
-		cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
 
 	/* This will not be called in the driver load path because the netdev
 	 * will not be created yet. All other cases with register the NAPI
@@ -659,8 +657,10 @@ int ice_vsi_wait_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)
  */
 int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
 {
+	cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
 	struct device *dev = ice_pf_to_dev(vsi->back);
-	u16 v_idx;
+	int numa_node = dev->numa_node;
+	u16 v_idx, cpu = 0;
 	int err;
 
 	if (vsi->q_vectors[0]) {
@@ -674,6 +674,17 @@ int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
 		goto err_out;
 	}
 
+	v_idx = 0;
+	for_each_numa_hop_mask(aff_mask, numa_node) {
+		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask)
+			if (v_idx < vsi->num_q_vectors) {
+				if (cpu_online(cpu))
+					cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
+				v_idx++;
+			}
+		last_aff_mask = aff_mask;
+	}
+
 	return 0;
 
 err_out:
With the introduction of sched_numa_hop_mask() and for_each_numa_hop_mask(),
the affinity masks for queue vectors can be conveniently set by preferring the
CPUs that are closest to the NUMA node of the parent PCI device.

Signed-off-by: Pawel Chmielewski <pawel.chmielewski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_base.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)
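As a rough illustration of the mechanism the changelog describes: walking the hop masks and diffing each one against the previous visits CPUs in order of increasing NUMA distance from the device's node. The sketch below is an assumption-laden example, not driver code; it presumes the sched_numa_hop_mask()/for_each_numa_hop_mask() series is present in the tree, uses dev_to_node() for the node lookup, and takes the RCU read lock on the assumption that the returned masks are RCU-protected:

	const struct cpumask *mask, *prev = cpu_none_mask;
	unsigned int cpu;
	int node = dev_to_node(dev);	/* dev being the parent PCI device */

	rcu_read_lock();
	for_each_numa_hop_mask(mask, node) {
		/* CPUs in this hop's mask that were not in any closer hop */
		for_each_cpu_andnot(cpu, mask, prev)
			pr_info("next closest CPU: %u\n", cpu);
		prev = mask;
	}
	rcu_read_unlock();

Each successive mask is a superset of the previous one, so the inner for_each_cpu_andnot() walk only sees the CPUs newly reachable at that hop, which is exactly the ordering the patch exploits when spreading queue vector affinities.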