| Message ID | e2e24e5f4174a56c725cde3164f86a3e234f6d7f.1639157090.git.robin.murphy@arm.com (mailing list archive) |
|---|---|
| State | New |
| Series | iommu: refactor flush queues into iommu-dma |
Hi Robin,

I love your patch! Yet something to improve:

[auto build test ERROR on joro-iommu/next]
[also build test ERROR on tegra/for-next v5.16-rc4]
[cannot apply to tegra-drm/drm/tegra/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Robin-Murphy/iommu-refactor-flush-queues-into-iommu-dma/20211211-015635
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: arm-randconfig-r013-20211210 (https://download.01.org/0day-ci/archive/20211211/202112110753.vYbSlMnq-lkp@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/3b6adb4a8ec42d7b5c1b3b1af2c857a2375fd7e1
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Robin-Murphy/iommu-refactor-flush-queues-into-iommu-dma/20211211-015635
        git checkout 3b6adb4a8ec42d7b5c1b3b1af2c857a2375fd7e1
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arm SHELL=/bin/bash drivers/gpu/drm/tegra/ drivers/iommu/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/gpu/drm/tegra/hub.c: In function 'tegra_display_hub_probe':
>> drivers/gpu/drm/tegra/hub.c:1043:24: error: implicit declaration of function 'dma_get_mask'; did you mean 'xa_get_mark'? [-Werror=implicit-function-declaration]
    1043 |         u64 dma_mask = dma_get_mask(pdev->dev.parent);
         |                        ^~~~~~~~~~~~
         |                        xa_get_mark
>> drivers/gpu/drm/tegra/hub.c:1050:15: error: implicit declaration of function 'dma_coerce_mask_and_coherent' [-Werror=implicit-function-declaration]
    1050 |         err = dma_coerce_mask_and_coherent(&pdev->dev, dma_mask);
         |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
--
   drivers/gpu/drm/tegra/plane.c: In function 'tegra_plane_reset':
>> drivers/gpu/drm/tegra/plane.c:46:42: error: 'DMA_MAPPING_ERROR' undeclared (first use in this function)
      46 |                         state->iova[i] = DMA_MAPPING_ERROR;
         |                                          ^~~~~~~~~~~~~~~~~
   drivers/gpu/drm/tegra/plane.c:46:42: note: each undeclared identifier is reported only once for each function it appears in
   drivers/gpu/drm/tegra/plane.c: In function 'tegra_plane_atomic_duplicate_state':
   drivers/gpu/drm/tegra/plane.c:76:33: error: 'DMA_MAPPING_ERROR' undeclared (first use in this function)
      76 |                 copy->iova[i] = DMA_MAPPING_ERROR;
         |                                 ^~~~~~~~~~~~~~~~~
   drivers/gpu/drm/tegra/plane.c: In function 'tegra_dc_pin':
>> drivers/gpu/drm/tegra/plane.c:170:31: error: implicit declaration of function 'dma_map_sgtable'; did you mean 'iommu_map_sgtable'? [-Werror=implicit-function-declaration]
     170 |                         err = dma_map_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
         |                               ^~~~~~~~~~~~~~~
         |                               iommu_map_sgtable
>> drivers/gpu/drm/tegra/plane.c:170:61: error: 'DMA_TO_DEVICE' undeclared (first use in this function); did you mean 'MT_DEVICE'?
     170 |                         err = dma_map_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
         |                                                             ^~~~~~~~~~~~~
         |                                                             MT_DEVICE
>> drivers/gpu/drm/tegra/plane.c:202:25: error: implicit declaration of function 'dma_unmap_sgtable'; did you mean 'iommu_map_sgtable'? [-Werror=implicit-function-declaration]
     202 |                         dma_unmap_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
         |                         ^~~~~~~~~~~~~~~~~
         |                         iommu_map_sgtable
   drivers/gpu/drm/tegra/plane.c:205:34: error: 'DMA_MAPPING_ERROR' undeclared (first use in this function)
     205 |                         state->iova[i] = DMA_MAPPING_ERROR;
         |                                          ^~~~~~~~~~~~~~~~~
   drivers/gpu/drm/tegra/plane.c: In function 'tegra_dc_unpin':
   drivers/gpu/drm/tegra/plane.c:221:57: error: 'DMA_TO_DEVICE' undeclared (first use in this function); did you mean 'MT_DEVICE'?
     221 |                 dma_unmap_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
         |                                                 ^~~~~~~~~~~~~
         |                                                 MT_DEVICE
   drivers/gpu/drm/tegra/plane.c:224:34: error: 'DMA_MAPPING_ERROR' undeclared (first use in this function)
     224 |                 state->iova[i] = DMA_MAPPING_ERROR;
         |                                  ^~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
--
   drivers/gpu/drm/tegra/dc.c: In function 'tegra_crtc_calculate_memory_bandwidth':
   drivers/gpu/drm/tegra/dc.c:2225:38: warning: variable 'old_state' set but not used [-Wunused-but-set-variable]
    2225 |         const struct drm_crtc_state *old_state;
         |                                      ^~~~~~~~~
   drivers/gpu/drm/tegra/dc.c: In function 'tegra_dc_probe':
>> drivers/gpu/drm/tegra/dc.c:2978:24: error: implicit declaration of function 'dma_get_mask'; did you mean 'xa_get_mark'? [-Werror=implicit-function-declaration]
    2978 |         u64 dma_mask = dma_get_mask(pdev->dev.parent);
         |                        ^~~~~~~~~~~~
         |                        xa_get_mark
>> drivers/gpu/drm/tegra/dc.c:2982:15: error: implicit declaration of function 'dma_coerce_mask_and_coherent' [-Werror=implicit-function-declaration]
    2982 |         err = dma_coerce_mask_and_coherent(&pdev->dev, dma_mask);
         |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +1043 drivers/gpu/drm/tegra/hub.c

c4755fb9064f64 Thierry Reding 2017-11-13  1040
c4755fb9064f64 Thierry Reding 2017-11-13  1041  static int tegra_display_hub_probe(struct platform_device *pdev)
c4755fb9064f64 Thierry Reding 2017-11-13  1042  {
86044e749be77a Thierry Reding 2021-03-26 @1043          u64 dma_mask = dma_get_mask(pdev->dev.parent);
0cffbde2e318cc Thierry Reding 2018-11-29  1044          struct device_node *child = NULL;
c4755fb9064f64 Thierry Reding 2017-11-13  1045          struct tegra_display_hub *hub;
0cffbde2e318cc Thierry Reding 2018-11-29  1046          struct clk *clk;
c4755fb9064f64 Thierry Reding 2017-11-13  1047          unsigned int i;
c4755fb9064f64 Thierry Reding 2017-11-13  1048          int err;
c4755fb9064f64 Thierry Reding 2017-11-13  1049
86044e749be77a Thierry Reding 2021-03-26 @1050          err = dma_coerce_mask_and_coherent(&pdev->dev, dma_mask);
86044e749be77a Thierry Reding 2021-03-26  1051          if (err < 0) {
86044e749be77a Thierry Reding 2021-03-26  1052                  dev_err(&pdev->dev, "failed to set DMA mask: %d\n", err);
86044e749be77a Thierry Reding 2021-03-26  1053                  return err;
86044e749be77a Thierry Reding 2021-03-26  1054          }
86044e749be77a Thierry Reding 2021-03-26  1055
c4755fb9064f64 Thierry Reding 2017-11-13  1056          hub = devm_kzalloc(&pdev->dev, sizeof(*hub), GFP_KERNEL);
c4755fb9064f64 Thierry Reding 2017-11-13  1057          if (!hub)
c4755fb9064f64 Thierry Reding 2017-11-13  1058                  return -ENOMEM;
c4755fb9064f64 Thierry Reding 2017-11-13  1059
c4755fb9064f64 Thierry Reding 2017-11-13  1060          hub->soc = of_device_get_match_data(&pdev->dev);
c4755fb9064f64 Thierry Reding 2017-11-13  1061
c4755fb9064f64 Thierry Reding 2017-11-13  1062          hub->clk_disp = devm_clk_get(&pdev->dev, "disp");
c4755fb9064f64 Thierry Reding 2017-11-13  1063          if (IS_ERR(hub->clk_disp)) {
c4755fb9064f64 Thierry Reding 2017-11-13  1064                  err = PTR_ERR(hub->clk_disp);
c4755fb9064f64 Thierry Reding 2017-11-13  1065                  return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1066          }
c4755fb9064f64 Thierry Reding 2017-11-13  1067
5725daaab55ca0 Thierry Reding 2018-09-21  1068          if (hub->soc->supports_dsc) {
c4755fb9064f64 Thierry Reding 2017-11-13  1069                  hub->clk_dsc = devm_clk_get(&pdev->dev, "dsc");
c4755fb9064f64 Thierry Reding 2017-11-13  1070                  if (IS_ERR(hub->clk_dsc)) {
c4755fb9064f64 Thierry Reding 2017-11-13  1071                          err = PTR_ERR(hub->clk_dsc);
c4755fb9064f64 Thierry Reding 2017-11-13  1072                          return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1073                  }
5725daaab55ca0 Thierry Reding 2018-09-21  1074          }
c4755fb9064f64 Thierry Reding 2017-11-13  1075
c4755fb9064f64 Thierry Reding 2017-11-13  1076          hub->clk_hub = devm_clk_get(&pdev->dev, "hub");
c4755fb9064f64 Thierry Reding 2017-11-13  1077          if (IS_ERR(hub->clk_hub)) {
c4755fb9064f64 Thierry Reding 2017-11-13  1078                  err = PTR_ERR(hub->clk_hub);
c4755fb9064f64 Thierry Reding 2017-11-13  1079                  return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1080          }
c4755fb9064f64 Thierry Reding 2017-11-13  1081
c4755fb9064f64 Thierry Reding 2017-11-13  1082          hub->rst = devm_reset_control_get(&pdev->dev, "misc");
c4755fb9064f64 Thierry Reding 2017-11-13  1083          if (IS_ERR(hub->rst)) {
c4755fb9064f64 Thierry Reding 2017-11-13  1084                  err = PTR_ERR(hub->rst);
c4755fb9064f64 Thierry Reding 2017-11-13  1085                  return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1086          }
c4755fb9064f64 Thierry Reding 2017-11-13  1087
c4755fb9064f64 Thierry Reding 2017-11-13  1088          hub->wgrps = devm_kcalloc(&pdev->dev, hub->soc->num_wgrps,
c4755fb9064f64 Thierry Reding 2017-11-13  1089                                    sizeof(*hub->wgrps), GFP_KERNEL);
c4755fb9064f64 Thierry Reding 2017-11-13  1090          if (!hub->wgrps)
c4755fb9064f64 Thierry Reding 2017-11-13  1091                  return -ENOMEM;
c4755fb9064f64 Thierry Reding 2017-11-13  1092
c4755fb9064f64 Thierry Reding 2017-11-13  1093          for (i = 0; i < hub->soc->num_wgrps; i++) {
c4755fb9064f64 Thierry Reding 2017-11-13  1094                  struct tegra_windowgroup *wgrp = &hub->wgrps[i];
c4755fb9064f64 Thierry Reding 2017-11-13  1095                  char id[8];
c4755fb9064f64 Thierry Reding 2017-11-13  1096
c4755fb9064f64 Thierry Reding 2017-11-13  1097                  snprintf(id, sizeof(id), "wgrp%u", i);
c4755fb9064f64 Thierry Reding 2017-11-13  1098                  mutex_init(&wgrp->lock);
c4755fb9064f64 Thierry Reding 2017-11-13  1099                  wgrp->usecount = 0;
c4755fb9064f64 Thierry Reding 2017-11-13  1100                  wgrp->index = i;
c4755fb9064f64 Thierry Reding 2017-11-13  1101
c4755fb9064f64 Thierry Reding 2017-11-13  1102                  wgrp->rst = devm_reset_control_get(&pdev->dev, id);
c4755fb9064f64 Thierry Reding 2017-11-13  1103                  if (IS_ERR(wgrp->rst))
c4755fb9064f64 Thierry Reding 2017-11-13  1104                          return PTR_ERR(wgrp->rst);
c4755fb9064f64 Thierry Reding 2017-11-13  1105
c4755fb9064f64 Thierry Reding 2017-11-13  1106                  err = reset_control_assert(wgrp->rst);
c4755fb9064f64 Thierry Reding 2017-11-13  1107                  if (err < 0)
c4755fb9064f64 Thierry Reding 2017-11-13  1108                          return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1109          }
c4755fb9064f64 Thierry Reding 2017-11-13  1110
0cffbde2e318cc Thierry Reding 2018-11-29  1111          hub->num_heads = of_get_child_count(pdev->dev.of_node);
0cffbde2e318cc Thierry Reding 2018-11-29  1112
0cffbde2e318cc Thierry Reding 2018-11-29  1113          hub->clk_heads = devm_kcalloc(&pdev->dev, hub->num_heads, sizeof(clk),
0cffbde2e318cc Thierry Reding 2018-11-29  1114                                        GFP_KERNEL);
0cffbde2e318cc Thierry Reding 2018-11-29  1115          if (!hub->clk_heads)
0cffbde2e318cc Thierry Reding 2018-11-29  1116                  return -ENOMEM;
0cffbde2e318cc Thierry Reding 2018-11-29  1117
0cffbde2e318cc Thierry Reding 2018-11-29  1118          for (i = 0; i < hub->num_heads; i++) {
0cffbde2e318cc Thierry Reding 2018-11-29  1119                  child = of_get_next_child(pdev->dev.of_node, child);
0cffbde2e318cc Thierry Reding 2018-11-29  1120                  if (!child) {
0cffbde2e318cc Thierry Reding 2018-11-29  1121                          dev_err(&pdev->dev, "failed to find node for head %u\n",
0cffbde2e318cc Thierry Reding 2018-11-29  1122                                  i);
0cffbde2e318cc Thierry Reding 2018-11-29  1123                          return -ENODEV;
0cffbde2e318cc Thierry Reding 2018-11-29  1124                  }
0cffbde2e318cc Thierry Reding 2018-11-29  1125
0cffbde2e318cc Thierry Reding 2018-11-29  1126                  clk = devm_get_clk_from_child(&pdev->dev, child, "dc");
0cffbde2e318cc Thierry Reding 2018-11-29  1127                  if (IS_ERR(clk)) {
0cffbde2e318cc Thierry Reding 2018-11-29  1128                          dev_err(&pdev->dev, "failed to get clock for head %u\n",
0cffbde2e318cc Thierry Reding 2018-11-29  1129                                  i);
0cffbde2e318cc Thierry Reding 2018-11-29  1130                          of_node_put(child);
0cffbde2e318cc Thierry Reding 2018-11-29  1131                          return PTR_ERR(clk);
0cffbde2e318cc Thierry Reding 2018-11-29  1132                  }
0cffbde2e318cc Thierry Reding 2018-11-29  1133
0cffbde2e318cc Thierry Reding 2018-11-29  1134                  hub->clk_heads[i] = clk;
0cffbde2e318cc Thierry Reding 2018-11-29  1135          }
0cffbde2e318cc Thierry Reding 2018-11-29  1136
0cffbde2e318cc Thierry Reding 2018-11-29  1137          of_node_put(child);
0cffbde2e318cc Thierry Reding 2018-11-29  1138
c4755fb9064f64 Thierry Reding 2017-11-13  1139          /* XXX: enable clock across reset? */
c4755fb9064f64 Thierry Reding 2017-11-13  1140          err = reset_control_assert(hub->rst);
c4755fb9064f64 Thierry Reding 2017-11-13  1141          if (err < 0)
c4755fb9064f64 Thierry Reding 2017-11-13  1142                  return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1143
c4755fb9064f64 Thierry Reding 2017-11-13  1144          platform_set_drvdata(pdev, hub);
c4755fb9064f64 Thierry Reding 2017-11-13  1145          pm_runtime_enable(&pdev->dev);
c4755fb9064f64 Thierry Reding 2017-11-13  1146
c4755fb9064f64 Thierry Reding 2017-11-13  1147          INIT_LIST_HEAD(&hub->client.list);
c4755fb9064f64 Thierry Reding 2017-11-13  1148          hub->client.ops = &tegra_display_hub_ops;
c4755fb9064f64 Thierry Reding 2017-11-13  1149          hub->client.dev = &pdev->dev;
c4755fb9064f64 Thierry Reding 2017-11-13  1150
c4755fb9064f64 Thierry Reding 2017-11-13  1151          err = host1x_client_register(&hub->client);
c4755fb9064f64 Thierry Reding 2017-11-13  1152          if (err < 0)
c4755fb9064f64 Thierry Reding 2017-11-13  1153                  dev_err(&pdev->dev, "failed to register host1x client: %d\n",
c4755fb9064f64 Thierry Reding 2017-11-13  1154                          err);
c4755fb9064f64 Thierry Reding 2017-11-13  1155
a101e3dad8a90a Thierry Reding 2020-06-12  1156          err = devm_of_platform_populate(&pdev->dev);
a101e3dad8a90a Thierry Reding 2020-06-12  1157          if (err < 0)
a101e3dad8a90a Thierry Reding 2020-06-12  1158                  goto unregister;
a101e3dad8a90a Thierry Reding 2020-06-12  1159
a101e3dad8a90a Thierry Reding 2020-06-12  1160          return err;
a101e3dad8a90a Thierry Reding 2020-06-12  1161
a101e3dad8a90a Thierry Reding 2020-06-12  1162  unregister:
a101e3dad8a90a Thierry Reding 2020-06-12  1163          host1x_client_unregister(&hub->client);
a101e3dad8a90a Thierry Reding 2020-06-12  1164          pm_runtime_disable(&pdev->dev);
c4755fb9064f64 Thierry Reding 2017-11-13  1165          return err;
c4755fb9064f64 Thierry Reding 2017-11-13  1166  }
c4755fb9064f64 Thierry Reding 2017-11-13  1167

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi Robin,

I love your patch! Yet something to improve:

[auto build test ERROR on joro-iommu/next]
[also build test ERROR on tegra/for-next v5.16-rc4]
[cannot apply to tegra-drm/drm/tegra/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Robin-Murphy/iommu-refactor-flush-queues-into-iommu-dma/20211211-015635
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: arm64-randconfig-r014-20211210 (https://download.01.org/0day-ci/archive/20211211/202112110744.cWU0wC1O-lkp@intel.com/config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 097a1cb1d5ebb3a0ec4bcaed8ba3ff6a8e33c00a)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # https://github.com/0day-ci/linux/commit/3b6adb4a8ec42d7b5c1b3b1af2c857a2375fd7e1
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Robin-Murphy/iommu-refactor-flush-queues-into-iommu-dma/20211211-015635
        git checkout 3b6adb4a8ec42d7b5c1b3b1af2c857a2375fd7e1
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=arm64 SHELL=/bin/bash drivers/gpu/drm/tegra/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/gpu/drm/tegra/hub.c:1043:17: error: implicit declaration of function 'dma_get_mask' [-Werror,-Wimplicit-function-declaration]
           u64 dma_mask = dma_get_mask(pdev->dev.parent);
                          ^
   drivers/gpu/drm/tegra/hub.c:1043:17: note: did you mean 'xa_get_mark'?
   include/linux/xarray.h:354:6: note: 'xa_get_mark' declared here
   bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
        ^
>> drivers/gpu/drm/tegra/hub.c:1050:8: error: implicit declaration of function 'dma_coerce_mask_and_coherent' [-Werror,-Wimplicit-function-declaration]
           err = dma_coerce_mask_and_coherent(&pdev->dev, dma_mask);
                 ^
   2 errors generated.
--
>> drivers/gpu/drm/tegra/plane.c:46:21: error: use of undeclared identifier 'DMA_MAPPING_ERROR'
                           state->iova[i] = DMA_MAPPING_ERROR;
                                            ^
   drivers/gpu/drm/tegra/plane.c:76:19: error: use of undeclared identifier 'DMA_MAPPING_ERROR'
                   copy->iova[i] = DMA_MAPPING_ERROR;
                                   ^
>> drivers/gpu/drm/tegra/plane.c:170:10: error: implicit declaration of function 'dma_map_sgtable' [-Werror,-Wimplicit-function-declaration]
                           err = dma_map_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
                                 ^
   drivers/gpu/drm/tegra/plane.c:170:10: note: did you mean 'iommu_map_sgtable'?
   include/linux/iommu.h:1097:22: note: 'iommu_map_sgtable' declared here
   static inline size_t iommu_map_sgtable(struct iommu_domain *domain,
                        ^
>> drivers/gpu/drm/tegra/plane.c:170:40: error: use of undeclared identifier 'DMA_TO_DEVICE'
                           err = dma_map_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
                                                               ^
>> drivers/gpu/drm/tegra/plane.c:202:4: error: implicit declaration of function 'dma_unmap_sgtable' [-Werror,-Wimplicit-function-declaration]
                           dma_unmap_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
                           ^
   drivers/gpu/drm/tegra/plane.c:202:4: note: did you mean 'iommu_map_sgtable'?
   include/linux/iommu.h:1097:22: note: 'iommu_map_sgtable' declared here
   static inline size_t iommu_map_sgtable(struct iommu_domain *domain,
                        ^
   drivers/gpu/drm/tegra/plane.c:202:36: error: use of undeclared identifier 'DMA_TO_DEVICE'
                           dma_unmap_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
                                                           ^
   drivers/gpu/drm/tegra/plane.c:205:20: error: use of undeclared identifier 'DMA_MAPPING_ERROR'
                           state->iova[i] = DMA_MAPPING_ERROR;
                                            ^
   drivers/gpu/drm/tegra/plane.c:221:4: error: implicit declaration of function 'dma_unmap_sgtable' [-Werror,-Wimplicit-function-declaration]
                   dma_unmap_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
                   ^
   drivers/gpu/drm/tegra/plane.c:221:36: error: use of undeclared identifier 'DMA_TO_DEVICE'
                   dma_unmap_sgtable(dc->dev, sgt, DMA_TO_DEVICE, 0);
                                                   ^
   drivers/gpu/drm/tegra/plane.c:224:20: error: use of undeclared identifier 'DMA_MAPPING_ERROR'
                   state->iova[i] = DMA_MAPPING_ERROR;
                                    ^
   10 errors generated.
--
   drivers/gpu/drm/tegra/dc.c:2225:31: warning: variable 'old_state' set but not used [-Wunused-but-set-variable]
           const struct drm_crtc_state *old_state;
                                        ^
>> drivers/gpu/drm/tegra/dc.c:2978:17: error: implicit declaration of function 'dma_get_mask' [-Werror,-Wimplicit-function-declaration]
           u64 dma_mask = dma_get_mask(pdev->dev.parent);
                          ^
   drivers/gpu/drm/tegra/dc.c:2978:17: note: did you mean 'xa_get_mark'?
   include/linux/xarray.h:354:6: note: 'xa_get_mark' declared here
   bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
        ^
>> drivers/gpu/drm/tegra/dc.c:2982:8: error: implicit declaration of function 'dma_coerce_mask_and_coherent' [-Werror,-Wimplicit-function-declaration]
           err = dma_coerce_mask_and_coherent(&pdev->dev, dma_mask);
                 ^
   1 warning and 2 errors generated.
vim +/dma_get_mask +1043 drivers/gpu/drm/tegra/hub.c
On 10/12/2021 17:54, Robin Murphy wrote:
> Complete the move into iommu-dma by refactoring the flush queues
> themselves to belong to the DMA cookie rather than the IOVA domain.
>
> The refactoring may as well extend to some minor cosmetic aspects
> too, to help us stay one step ahead of the style police.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---

Again, FWIW:

Reviewed-by: John Garry <john.garry@huawei.com>
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ab8818965b2f..a7cd3a875481 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -9,9 +9,12 @@
  */
 
 #include <linux/acpi_iort.h>
+#include <linux/atomic.h>
+#include <linux/crash_dump.h>
 #include <linux/device.h>
-#include <linux/dma-map-ops.h>
+#include <linux/dma-direct.h>
 #include <linux/dma-iommu.h>
+#include <linux/dma-map-ops.h>
 #include <linux/gfp.h>
 #include <linux/huge_mm.h>
 #include <linux/iommu.h>
@@ -20,11 +23,10 @@
 #include <linux/mm.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
-#include <linux/swiotlb.h>
 #include <linux/scatterlist.h>
+#include <linux/spinlock.h>
+#include <linux/swiotlb.h>
 #include <linux/vmalloc.h>
-#include <linux/crash_dump.h>
-#include <linux/dma-direct.h>
 
 struct iommu_dma_msi_page {
 	struct list_head	list;
@@ -41,7 +43,19 @@ struct iommu_dma_cookie {
 	enum iommu_dma_cookie_type	type;
 	union {
 		/* Full allocator for IOMMU_DMA_IOVA_COOKIE */
-		struct iova_domain	iovad;
+		struct {
+			struct iova_domain	iovad;
+
+			struct iova_fq __percpu *fq;	/* Flush queue */
+			/* Number of TLB flushes that have been started */
+			atomic64_t		fq_flush_start_cnt;
+			/* Number of TLB flushes that have been finished */
+			atomic64_t		fq_flush_finish_cnt;
+			/* Timer to regularily empty the flush queues */
+			struct timer_list	fq_timer;
+			/* 1 when timer is active, 0 when not */
+			atomic_t		fq_timer_on;
+		};
 		/* Trivial linear page allocator for IOMMU_DMA_MSI_COOKIE */
 		dma_addr_t		msi_iova;
 	};
@@ -65,6 +79,27 @@ static int __init iommu_dma_forcedac_setup(char *str)
 }
 early_param("iommu.forcedac", iommu_dma_forcedac_setup);
 
+/* Number of entries per flush queue */
+#define IOVA_FQ_SIZE	256
+
+/* Timeout (in ms) after which entries are flushed from the queue */
+#define IOVA_FQ_TIMEOUT	10
+
+/* Flush queue entry for deferred flushing */
+struct iova_fq_entry {
+	unsigned long iova_pfn;
+	unsigned long pages;
+	struct list_head freelist;
+	u64 counter; /* Flush counter when this entry was added */
+};
+
+/* Per-CPU flush queue structure */
+struct iova_fq {
+	struct iova_fq_entry entries[IOVA_FQ_SIZE];
+	unsigned int head, tail;
+	spinlock_t lock;
+};
+
 #define fq_ring_for_each(i, fq) \
 	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE)
 
@@ -74,9 +109,9 @@ static inline bool fq_full(struct iova_fq *fq)
 	return (((fq->tail + 1) % IOVA_FQ_SIZE) == fq->head);
 }
 
-static inline unsigned fq_ring_add(struct iova_fq *fq)
+static inline unsigned int fq_ring_add(struct iova_fq *fq)
 {
-	unsigned idx = fq->tail;
+	unsigned int idx = fq->tail;
 
 	assert_spin_locked(&fq->lock);
 
@@ -85,10 +120,10 @@ static inline unsigned fq_ring_add(struct iova_fq *fq)
 	return idx;
 }
 
-static void fq_ring_free(struct iova_domain *iovad, struct iova_fq *fq)
+static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
 {
-	u64 counter = atomic64_read(&iovad->fq_flush_finish_cnt);
-	unsigned idx;
+	u64 counter = atomic64_read(&cookie->fq_flush_finish_cnt);
+	unsigned int idx;
 
 	assert_spin_locked(&fq->lock);
 
@@ -98,7 +133,7 @@ static void fq_ring_free(struct iova_domain *iovad, struct iova_fq *fq)
 			break;
 
 		put_pages_list(&fq->entries[idx].freelist);
-		free_iova_fast(iovad,
+		free_iova_fast(&cookie->iovad,
 			       fq->entries[idx].iova_pfn,
 			       fq->entries[idx].pages);
 
@@ -106,50 +141,50 @@ static void fq_ring_free(struct iova_domain *iovad, struct iova_fq *fq)
 	}
 }
 
-static void iova_domain_flush(struct iova_domain *iovad)
+static void fq_flush_iotlb(struct iommu_dma_cookie *cookie)
 {
-	atomic64_inc(&iovad->fq_flush_start_cnt);
-	iovad->fq_domain->ops->flush_iotlb_all(iovad->fq_domain);
-	atomic64_inc(&iovad->fq_flush_finish_cnt);
+	atomic64_inc(&cookie->fq_flush_start_cnt);
+	cookie->fq_domain->ops->flush_iotlb_all(cookie->fq_domain);
+	atomic64_inc(&cookie->fq_flush_finish_cnt);
 }
 
 static void fq_flush_timeout(struct timer_list *t)
 {
-	struct iova_domain *iovad = from_timer(iovad, t, fq_timer);
+	struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer);
 	int cpu;
 
-	atomic_set(&iovad->fq_timer_on, 0);
-	iova_domain_flush(iovad);
+	atomic_set(&cookie->fq_timer_on, 0);
+	fq_flush_iotlb(cookie);
 
 	for_each_possible_cpu(cpu) {
 		unsigned long flags;
 		struct iova_fq *fq;
 
-		fq = per_cpu_ptr(iovad->fq, cpu);
+		fq = per_cpu_ptr(cookie->fq, cpu);
 		spin_lock_irqsave(&fq->lock, flags);
-		fq_ring_free(iovad, fq);
+		fq_ring_free(cookie, fq);
 		spin_unlock_irqrestore(&fq->lock, flags);
 	}
 }
 
-void queue_iova(struct iova_domain *iovad,
+static void queue_iova(struct iommu_dma_cookie *cookie,
 		unsigned long pfn, unsigned long pages,
 		struct list_head *freelist)
 {
 	struct iova_fq *fq;
 	unsigned long flags;
-	unsigned idx;
+	unsigned int idx;
 
 	/*
 	 * Order against the IOMMU driver's pagetable update from unmapping
-	 * @pte, to guarantee that iova_domain_flush() observes that if called
+	 * @pte, to guarantee that fq_flush_iotlb() observes that if called
 	 * from a different CPU before we release the lock below. Full barrier
 	 * so it also pairs with iommu_dma_init_fq() to avoid seeing partially
 	 * written fq state here.
 	 */
 	smp_mb();
 
-	fq = raw_cpu_ptr(iovad->fq);
+	fq = raw_cpu_ptr(cookie->fq);
 	spin_lock_irqsave(&fq->lock, flags);
 
 	/*
@@ -157,65 +192,66 @@ void queue_iova(struct iova_domain *iovad,
 	 * flushed out on another CPU. This makes the fq_full() check below less
 	 * likely to be true.
	 */
-	fq_ring_free(iovad, fq);
+	fq_ring_free(cookie, fq);
 
 	if (fq_full(fq)) {
-		iova_domain_flush(iovad);
-		fq_ring_free(iovad, fq);
+		fq_flush_iotlb(cookie);
+		fq_ring_free(cookie, fq);
 	}
 
 	idx = fq_ring_add(fq);
 
 	fq->entries[idx].iova_pfn = pfn;
 	fq->entries[idx].pages    = pages;
-	fq->entries[idx].counter  = atomic64_read(&iovad->fq_flush_start_cnt);
+	fq->entries[idx].counter  = atomic64_read(&cookie->fq_flush_start_cnt);
 	list_splice(freelist, &fq->entries[idx].freelist);
 
 	spin_unlock_irqrestore(&fq->lock, flags);
 
 	/* Avoid false sharing as much as possible. */
-	if (!atomic_read(&iovad->fq_timer_on) &&
-	    !atomic_xchg(&iovad->fq_timer_on, 1))
-		mod_timer(&iovad->fq_timer,
+	if (!atomic_read(&cookie->fq_timer_on) &&
+	    !atomic_xchg(&cookie->fq_timer_on, 1))
+		mod_timer(&cookie->fq_timer,
			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
 }
 
-static void free_iova_flush_queue(struct iova_domain *iovad)
+static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
 {
 	int cpu, idx;
 
-	if (!iovad->fq)
+	if (!cookie->fq)
 		return;
 
-	del_timer_sync(&iovad->fq_timer);
-	/*
-	 * This code runs when the iova_domain is being detroyed, so don't
-	 * bother to free iovas, just free any remaining pagetable pages.
-	 */
+	del_timer_sync(&cookie->fq_timer);
+	/* The IOVAs will be torn down separately, so just free our queued pages */
 	for_each_possible_cpu(cpu) {
-		struct iova_fq *fq = per_cpu_ptr(iovad->fq, cpu);
+		struct iova_fq *fq = per_cpu_ptr(cookie->fq, cpu);
 
 		fq_ring_for_each(idx, fq)
 			put_pages_list(&fq->entries[idx].freelist);
 	}
 
-	free_percpu(iovad->fq);
-
-	iovad->fq = NULL;
-	iovad->fq_domain = NULL;
+	free_percpu(cookie->fq);
 }
 
-int init_iova_flush_queue(struct iova_domain *iovad, struct iommu_domain *fq_domain)
+/* sysfs updates are serialised by the mutex of the group owning @domain */
+int iommu_dma_init_fq(struct iommu_domain *domain)
 {
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_fq __percpu *queue;
 	int i, cpu;
 
-	atomic64_set(&iovad->fq_flush_start_cnt, 0);
-	atomic64_set(&iovad->fq_flush_finish_cnt, 0);
+	if (cookie->fq_domain)
+		return 0;
+
+	atomic64_set(&cookie->fq_flush_start_cnt,  0);
+	atomic64_set(&cookie->fq_flush_finish_cnt, 0);
 
 	queue = alloc_percpu(struct iova_fq);
-	if (!queue)
+	if (!queue) {
+		pr_warn("iova flush queue initialization failed\n");
 		return -ENOMEM;
+	}
 
 	for_each_possible_cpu(cpu) {
 		struct iova_fq *fq = per_cpu_ptr(queue, cpu);
@@ -229,12 +265,16 @@ int init_iova_flush_queue(struct iova_domain *iovad, struct iommu_domain *fq_dom
 		INIT_LIST_HEAD(&fq->entries[i].freelist);
 	}
 
-
iovad->fq_domain = fq_domain; - iovad->fq = queue; - - timer_setup(&iovad->fq_timer, fq_flush_timeout, 0); - atomic_set(&iovad->fq_timer_on, 0); + cookie->fq = queue; + timer_setup(&cookie->fq_timer, fq_flush_timeout, 0); + atomic_set(&cookie->fq_timer_on, 0); + /* + * Prevent incomplete fq state being observable. Pairs with path from + * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova() + */ + smp_wmb(); + WRITE_ONCE(cookie->fq_domain, domain); return 0; } @@ -320,7 +360,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain) return; if (cookie->type == IOMMU_DMA_IOVA_COOKIE && cookie->iovad.granule) { - free_iova_flush_queue(&cookie->iovad); + iommu_dma_free_fq(cookie); put_iova_domain(&cookie->iovad); } @@ -469,29 +509,6 @@ static bool dev_use_swiotlb(struct device *dev) return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev); } -/* sysfs updates are serialised by the mutex of the group owning @domain */ -int iommu_dma_init_fq(struct iommu_domain *domain) -{ - struct iommu_dma_cookie *cookie = domain->iova_cookie; - int ret; - - if (cookie->fq_domain) - return 0; - - ret = init_iova_flush_queue(&cookie->iovad, domain); - if (ret) { - pr_warn("iova flush queue initialization failed\n"); - return ret; - } - /* - * Prevent incomplete iovad->fq being observable. 
Pairs with path from - * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova() - */ - smp_wmb(); - WRITE_ONCE(cookie->fq_domain, domain); - return 0; -} - /** * iommu_dma_init_domain - Initialise a DMA mapping domain * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() @@ -630,7 +647,7 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, if (cookie->type == IOMMU_DMA_MSI_COOKIE) cookie->msi_iova -= size; else if (gather && gather->queued) - queue_iova(iovad, iova_pfn(iovad, iova), + queue_iova(cookie, iova_pfn(iovad, iova), size >> iova_shift(iovad), &gather->freelist); else diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c index 6673dfa8e7c5..72ac25831584 100644 --- a/drivers/iommu/iova.c +++ b/drivers/iommu/iova.c @@ -61,8 +61,6 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule, iovad->start_pfn = start_pfn; iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad)); iovad->max32_alloc_size = iovad->dma_32bit_pfn; - iovad->fq_domain = NULL; - iovad->fq = NULL; iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR; rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node); rb_insert_color(&iovad->anchor.node, &iovad->rbroot); diff --git a/include/linux/iova.h b/include/linux/iova.h index 072a09c06e8a..0abd48c5e622 100644 --- a/include/linux/iova.h +++ b/include/linux/iova.h @@ -12,9 +12,6 @@ #include <linux/types.h> #include <linux/kernel.h> #include <linux/rbtree.h> -#include <linux/atomic.h> -#include <linux/dma-mapping.h> -#include <linux/iommu.h> /* iova structure */ struct iova { @@ -36,27 +33,6 @@ struct iova_rcache { struct iova_cpu_rcache __percpu *cpu_rcaches; }; -/* Number of entries per Flush Queue */ -#define IOVA_FQ_SIZE 256 - -/* Timeout (in ms) after which entries are flushed from the Flush-Queue */ -#define IOVA_FQ_TIMEOUT 10 - -/* Flush Queue entry for defered flushing */ -struct iova_fq_entry { - unsigned long iova_pfn; - unsigned long pages; - struct list_head freelist; 
- u64 counter; /* Flush counter when this entrie was added */ -}; - -/* Per-CPU Flush Queue structure */ -struct iova_fq { - struct iova_fq_entry entries[IOVA_FQ_SIZE]; - unsigned head, tail; - spinlock_t lock; -}; - /* holds all the iova translations for a domain */ struct iova_domain { spinlock_t iova_rbtree_lock; /* Lock to protect update of rbtree */ @@ -67,23 +43,9 @@ struct iova_domain { unsigned long start_pfn; /* Lower limit for this domain */ unsigned long dma_32bit_pfn; unsigned long max32_alloc_size; /* Size of last failed allocation */ - struct iova_fq __percpu *fq; /* Flush Queue */ - - atomic64_t fq_flush_start_cnt; /* Number of TLB flushes that - have been started */ - - atomic64_t fq_flush_finish_cnt; /* Number of TLB flushes that - have been finished */ - struct iova anchor; /* rbtree lookup anchor */ + struct iova_rcache rcaches[IOVA_RANGE_CACHE_MAX_SIZE]; /* IOVA range caches */ - - struct iommu_domain *fq_domain; - - struct timer_list fq_timer; /* Timer to regularily empty the - flush-queues */ - atomic_t fq_timer_on; /* 1 when timer is active, 0 - when not */ struct hlist_node cpuhp_dead; }; @@ -133,16 +95,12 @@ struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size, bool size_aligned); void free_iova_fast(struct iova_domain *iovad, unsigned long pfn, unsigned long size); -void queue_iova(struct iova_domain *iovad, - unsigned long pfn, unsigned long pages, - struct list_head *freelist); unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn, bool flush_rcache); struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi); void init_iova_domain(struct iova_domain *iovad, unsigned long granule, unsigned long start_pfn); -int init_iova_flush_queue(struct iova_domain *iovad, struct iommu_domain *fq_domain); struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn); void put_iova_domain(struct iova_domain *iovad); #else
Complete the move into iommu-dma by refactoring the flush queues themselves to belong to the DMA cookie rather than the IOVA domain. The refactoring may as well extend to some minor cosmetic aspects too, to help us stay one step ahead of the style police. Signed-off-by: Robin Murphy <robin.murphy@arm.com> --- v2: Rebase with del_timer_sync() change drivers/iommu/dma-iommu.c | 171 +++++++++++++++++++++----------------- drivers/iommu/iova.c | 2 - include/linux/iova.h | 44 +--------- 3 files changed, 95 insertions(+), 122 deletions(-)