Message ID | 1470761563-25173-6-git-send-email-Waiman.Long@hpe.com
---|---
State | New, archived
Hi Waiman,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.8-rc1 next-20160809]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Waiman-Long/vfs-Use-dlock-list-for-SB-s-s_inodes-list/20160810-012457
config: i386-randconfig-s0-201632 (attached as .config)
compiler: gcc-6 (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386

All error/warnings (new ones prefixed by >>):

   lib/dlock-list.c: In function 'cpu2list_init':
>> lib/dlock-list.c:60:15: error: assignment to expression with array type
      sibling_mask = topology_sibling_cpumask(0);
                   ^
>> lib/dlock-list.c:61:6: warning: the address of 'sibling_mask' will always evaluate as 'true' [-Waddress]
      if (!sibling_mask)
          ^
   lib/dlock-list.c:74:16: error: assignment to expression with array type
      sibling_mask = topology_sibling_cpumask(cpu);
                    ^

vim +60 lib/dlock-list.c

    54          cpumask_var_t sibling_mask;
    55          static struct cpumask mask __initdata;
    56
    57          /*
    58           * Check # of sibling CPUs for CPU 0
    59           */
  > 60          sibling_mask = topology_sibling_cpumask(0);
  > 61          if (!sibling_mask)
    62                  goto done;
    63          nr_siblings = cpumask_weight(sibling_mask);
    64          if (nr_siblings == 1)

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
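The gcc-6 report above is the most explicit about the root cause: cpumask_var_t is only a pointer when CONFIG_CPUMASK_OFFSTACK is set. Otherwise it is a one-element array, which is what these configs use, as the "array type" wording proves. With the array form, assigning the return value of topology_sibling_cpumask() to it is an assignment to an array (the :60 and :74 errors), and "!sibling_mask" tests the decayed array address, which can never be NULL (the -Waddress warning). The two shapes of the typedef, paraphrased from include/linux/cpumask.h:

        #ifdef CONFIG_CPUMASK_OFFSTACK
        typedef struct cpumask *cpumask_var_t;          /* a real pointer: assignable */
        #else
        typedef struct cpumask cpumask_var_t[1];        /* an array: not assignable,
                                                           and never "false" */
        #endif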
Hi Waiman,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.8-rc1 next-20160809]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Waiman-Long/vfs-Use-dlock-list-for-SB-s-s_inodes-list/20160810-012457
config: x86_64-randconfig-s2-08100126 (attached as .config)
compiler: gcc-4.4 (Debian 4.4.7-8) 4.4.7
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

All errors (new ones prefixed by >>):

   lib/dlock-list.c: In function 'cpu2list_init':
>> lib/dlock-list.c:60: error: incompatible types when assigning to type 'cpumask_var_t' from type 'struct cpumask *'
   lib/dlock-list.c:61: warning: the address of 'sibling_mask' will always evaluate as 'true'
   lib/dlock-list.c:74: error: incompatible types when assigning to type 'cpumask_var_t' from type 'struct cpumask *'

vim +60 lib/dlock-list.c

    54          cpumask_var_t sibling_mask;
    55          static struct cpumask mask __initdata;
    56
    57          /*
    58           * Check # of sibling CPUs for CPU 0
    59           */
  > 60          sibling_mask = topology_sibling_cpumask(0);
    61          if (!sibling_mask)
    62                  goto done;
    63          nr_siblings = cpumask_weight(sibling_mask);

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
Hi Waiman,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.8-rc1 next-20160809]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Waiman-Long/vfs-Use-dlock-list-for-SB-s-s_inodes-list/20160810-012457
config: x86_64-randconfig-s0-08100126 (attached as .config)
compiler: gcc-4.4 (Debian 4.4.7-8) 4.4.7
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

All errors (new ones prefixed by >>):

   lib/dlock-list.c: In function 'cpu2list_init':
>> lib/dlock-list.c:60: error: incompatible types when assigning to type 'cpumask_var_t' from type 'const struct cpumask *'
   lib/dlock-list.c:61: warning: the address of 'sibling_mask' will always evaluate as 'true'
   lib/dlock-list.c:74: error: incompatible types when assigning to type 'cpumask_var_t' from type 'const struct cpumask *'

vim +60 lib/dlock-list.c

    54          cpumask_var_t sibling_mask;
    55          static struct cpumask mask __initdata;
    56
    57          /*
    58           * Check # of sibling CPUs for CPU 0
    59           */
  > 60          sibling_mask = topology_sibling_cpumask(0);
    61          if (!sibling_mask)
    62                  goto done;
    63          nr_siblings = cpumask_weight(sibling_mask);

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
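All three reports are the same bug with different compiler wording: gcc-4.4 phrases the array assignment as an "incompatible types" error, and the const variant reflects that in some configurations topology_sibling_cpumask() resolves to a const mask (for example the cpumask_of() fallback). Since the helper already hands back a pointer to an existing mask rather than something that can be NULL, the natural shape of a fix is to hold that pointer in a const struct cpumask * and drop the NULL test. A sketch of that idiom (not the re-sent patch itself):

        const struct cpumask *sibling_mask;

        /*
         * Check # of sibling CPUs for CPU 0
         */
        sibling_mask = topology_sibling_cpumask(0);
        nr_siblings = cpumask_weight(sibling_mask);
        if (nr_siblings == 1)
                goto done;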
On 08/09/2016 02:23 PM, kbuild test robot wrote:
> Hi Waiman,
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.8-rc1 next-20160809]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url:    https://github.com/0day-ci/linux/commits/Waiman-Long/vfs-Use-dlock-list-for-SB-s-s_inodes-list/20160810-012457
> config: i386-randconfig-s0-201632 (attached as .config)
> compiler: gcc-6 (Debian 6.1.1-9) 6.1.1 20160705
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386
>
> All error/warnings (new ones prefixed by >>):
>
>    lib/dlock-list.c: In function 'cpu2list_init':
>>> lib/dlock-list.c:60:15: error: assignment to expression with array type
>       sibling_mask = topology_sibling_cpumask(0);
>                    ^
>>> lib/dlock-list.c:61:6: warning: the address of 'sibling_mask' will always evaluate as 'true' [-Waddress]
>       if (!sibling_mask)
>           ^
>    lib/dlock-list.c:74:16: error: assignment to expression with array type
>       sibling_mask = topology_sibling_cpumask(cpu);
>                     ^
>
> vim +60 lib/dlock-list.c
>
>     54          cpumask_var_t sibling_mask;
>     55          static struct cpumask mask __initdata;
>     56
>     57          /*
>     58           * Check # of sibling CPUs for CPU 0
>     59           */
>   > 60          sibling_mask = topology_sibling_cpumask(0);
>   > 61          if (!sibling_mask)
>     62                  goto done;
>     63          nr_siblings = cpumask_weight(sibling_mask);
>     64          if (nr_siblings == 1)
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

I have updated and re-sent patch 5 that should fix the non-SMP build error.

Cheers,
Longman

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
diff --git a/include/linux/dlock-list.h b/include/linux/dlock-list.h
index 19f897e..f5d4251 100644
--- a/include/linux/dlock-list.h
+++ b/include/linux/dlock-list.h
@@ -39,6 +39,7 @@ struct dlock_list_head {
 
 struct dlock_list_heads {
         struct dlock_list_head *heads;
+        int nhead;
 };
 
 /*
@@ -58,6 +59,7 @@ struct dlock_list_node {
  */
 struct dlock_list_iter {
         int index;
+        int nhead;
         struct dlock_list_head *head, *entry;
 };
 
@@ -65,6 +67,7 @@ struct dlock_list_iter {
         {                                       \
                 .index = -1,                    \
                 .head  = (dlist)->heads,        \
+                .nhead = (dlist)->nhead,        \
         }
 
 #define DEFINE_DLOCK_LIST_ITER(s, heads)        \
diff --git a/lib/dlock-list.c b/lib/dlock-list.c
index 9d4950c..b8b3524 100644
--- a/lib/dlock-list.c
+++ b/lib/dlock-list.c
@@ -28,6 +28,71 @@
  */
 static struct lock_class_key dlock_list_key;
 
+/*
+ * Mapping CPU number to dlock list index.
+ */
+static DEFINE_PER_CPU_READ_MOSTLY(int, cpu2list);
+static int nr_dlists;
+
+/*
+ * Initialize cpu2list mapping table & nr_dlists;
+ *
+ * All the sibling CPUs of a sibling group will map to the same dlock list so
+ * as to reduce the number of dlock lists to be maintained while minimizing
+ * cacheline contention.
+ *
+ * As the sibling masks are set up in the core initcall phase, this function
+ * has to be done in the postcore phase to get the right data. An alloc can
+ * be called before init. In this case, we just do a simple 1-1 mapping
+ * between CPU and dlock list head. After init, multiple CPUs may map to the
+ * same dlock list head.
+ */
+static int __init cpu2list_init(void)
+{
+        int idx, cpu;
+        int nr_siblings;
+        cpumask_var_t sibling_mask;
+        static struct cpumask mask __initdata;
+
+        /*
+         * Check # of sibling CPUs for CPU 0
+         */
+        sibling_mask = topology_sibling_cpumask(0);
+        if (!sibling_mask)
+                goto done;
+        nr_siblings = cpumask_weight(sibling_mask);
+        if (nr_siblings == 1)
+                goto done;
+
+        cpumask_setall(&mask);
+        idx = 0;
+        for_each_possible_cpu(cpu) {
+                int scpu;
+
+                if (!cpumask_test_cpu(cpu, &mask))
+                        continue;
+                sibling_mask = topology_sibling_cpumask(cpu);
+                for_each_cpu(scpu, sibling_mask) {
+                        per_cpu(cpu2list, scpu) = idx;
+                        cpumask_clear_cpu(scpu, &mask);
+                }
+                idx++;
+        }
+
+        /*
+         * nr_dlists can only be set after cpu2list_map is properly initialized.
+         */
+        smp_mb();
+        nr_dlists = nr_cpu_ids/nr_siblings;
+
+        WARN_ON(cpumask_weight(&mask) != 0);
+        WARN_ON(idx > nr_dlists);
+        pr_info("dlock-list: %d head entries per dlock list.\n", nr_dlists);
+done:
+        return 0;
+}
+postcore_initcall(cpu2list_init);
+
 /**
  * alloc_dlock_list_heads - Initialize and allocate the list of head entries
  * @dlist: Pointer to the dlock_list_heads structure to be initialized
@@ -42,13 +107,14 @@ int alloc_dlock_list_heads(struct dlock_list_heads *dlist)
 {
         int idx;
 
-        dlist->heads = kcalloc(nr_cpu_ids, sizeof(struct dlock_list_head),
+        dlist->nhead = nr_dlists ? nr_dlists : nr_cpu_ids;
+        dlist->heads = kcalloc(dlist->nhead, sizeof(struct dlock_list_head),
                                GFP_KERNEL);
 
         if (!dlist->heads)
                 return -ENOMEM;
 
-        for (idx = 0; idx < nr_cpu_ids; idx++) {
+        for (idx = 0; idx < dlist->nhead; idx++) {
                 struct dlock_list_head *head = &dlist->heads[idx];
 
                 INIT_LIST_HEAD(&head->list);
@@ -69,6 +135,7 @@ void free_dlock_list_heads(struct dlock_list_heads *dlist)
 {
         kfree(dlist->heads);
         dlist->heads = NULL;
+        dlist->nhead = 0;
 }
 
 /**
@@ -84,7 +151,7 @@ bool dlock_list_empty(struct dlock_list_heads *dlist)
 {
         int idx;
 
-        for (idx = 0; idx < nr_cpu_ids; idx++)
+        for (idx = 0; idx < dlist->nhead; idx++)
                 if (!list_empty(&dlist->heads[idx].list))
                         return false;
         return true;
@@ -102,7 +169,9 @@ bool dlock_list_empty(struct dlock_list_heads *dlist)
 void dlock_list_add(struct dlock_list_node *node,
                     struct dlock_list_heads *dlist)
 {
-        struct dlock_list_head *head = &dlist->heads[smp_processor_id()];
+        int cpu = smp_processor_id();
+        int idx = (dlist->nhead < nr_cpu_ids) ? per_cpu(cpu2list, cpu) : cpu;
+        struct dlock_list_head *head = &dlist->heads[idx];
 
         /*
          * There is no need to disable preemption
@@ -175,7 +244,7 @@ next_list:
         /*
          * Try next list
          */
-        if (++iter->index >= nr_cpu_ids)
+        if (++iter->index >= iter->nhead)
                 return NULL;    /* All the entries iterated */
         if (list_empty(&iter->head[iter->index].list))
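The core of the patch is the grouping pass in cpu2list_init(): walk the possible CPUs, skip any CPU already claimed by an earlier sibling, and hand a whole sibling group one shared list index. The following standalone user-space model shows the same idea; the XOR-based topology is purely hypothetical (sibling pairs {0,4} {1,5} {2,6} {3,7}) and stands in for topology_sibling_cpumask():

        #include <stdio.h>

        #define NR_CPUS 8

        /* Hypothetical topology: CPU n and CPU n^4 are hyperthread siblings. */
        static int sibling_of(int cpu)
        {
                return cpu ^ 4;
        }

        int main(void)
        {
                int cpu2list[NR_CPUS];
                int mapped[NR_CPUS] = { 0 };
                int idx = 0;
                int cpu;

                /*
                 * Same pass as cpu2list_init(): the kernel version iterates
                 * the whole sibling mask; with two-way siblings, marking the
                 * one partner is enough.
                 */
                for (cpu = 0; cpu < NR_CPUS; cpu++) {
                        if (mapped[cpu])
                                continue;       /* claimed by an earlier sibling */
                        cpu2list[cpu] = cpu2list[sibling_of(cpu)] = idx;
                        mapped[cpu] = mapped[sibling_of(cpu)] = 1;
                        idx++;
                }

                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        printf("cpu %d -> list %d\n", cpu, cpu2list[cpu]);
                printf("nr_dlists = %d\n", idx);        /* 4 lists for 8 CPUs */
                return 0;
        }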
The dlock list needs one list for each of the CPUs available. However, sibling CPUs share the L2 cache, and probably the L1 cache as well. As a result, there is not much to gain in terms of avoiding cacheline contention, while keeping separate per-sibling lists increases the cacheline footprint of the L1/L2 caches since more list heads may need to be cached.

This patch makes all the sibling CPUs share the same list, thus reducing the number of lists that need to be maintained in each dlock list without any noticeable impact on performance. It also improves dlock list iteration performance as fewer lists need to be iterated.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 include/linux/dlock-list.h |  3 ++
 lib/dlock-list.c           | 79 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 77 insertions(+), 5 deletions(-)
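From a caller's point of view, nothing in this patch changes the API: the sibling grouping happens inside alloc_dlock_list_heads() and dlock_list_add(). A minimal usage sketch, where the function names and signatures come from the diff above while the surrounding module code is hypothetical:

        #include <linux/dlock-list.h>

        static struct dlock_list_heads my_list;         /* hypothetical user */

        static int my_init(void)
        {
                /* Allocates dlist->nhead lists: one per sibling group after boot. */
                return alloc_dlock_list_heads(&my_list);
        }

        static void my_add(struct dlock_list_node *node)
        {
                /* Indexes this CPU's group list via the cpu2list table. */
                dlock_list_add(node, &my_list);
        }

        static void my_exit(void)
        {
                WARN_ON(!dlock_list_empty(&my_list));
                free_dlock_list_heads(&my_list);
        }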