From patchwork Wed Feb 17 01:31:19 2016
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 8333471
From: Waiman Long
To: Alexander Viro, Jan Kara, Jeff Layton, "J.
Bruce Fields", Tejun Heo, Christoph Lameter
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Andi Kleen, Dave Chinner, Scott J Norton, Douglas Hatch, Waiman Long
Subject: [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks
Date: Tue, 16 Feb 2016 20:31:19 -0500
Message-Id: <1455672680-7153-2-git-send-email-Waiman.Long@hpe.com>
In-Reply-To: <1455672680-7153-1-git-send-email-Waiman.Long@hpe.com>
References: <1455672680-7153-1-git-send-email-Waiman.Long@hpe.com>

Linked lists are used everywhere in the Linux kernel. However, when many
threads try to add or delete entries on the same linked list, the list and
its lock can become a performance bottleneck.

This patch introduces a new per-cpu list subsystem with associated per-cpu
locks protecting each of the lists individually. This allows list entry
insertion and deletion operations to happen in parallel instead of being
serialized on a single global list and lock.

List entry insertion is strictly per-cpu. List deletion, however, can
happen on a CPU other than the one that did the insertion, so a lock is
still needed to protect each list. Because of that, there may still be a
small amount of contention when deletions are being done.

A new header file, include/linux/percpu-list.h, is added with the
associated percpu_list structure. The following functions are provided to
manage the per-cpu list:

1. int init_percpu_list_head(struct percpu_list **pclist_handle)
2. void percpu_list_add(struct percpu_list *new, struct percpu_list *head)
3.
void percpu_list_del(struct percpu_list *entry)

Signed-off-by: Waiman Long
---
 include/linux/percpu-list.h |  117 +++++++++++++++++++++++++++++++++++++++++++
 lib/Makefile                |    2 +-
 lib/percpu-list.c           |   80 +++++++++++++++++++++++++++++
 3 files changed, 198 insertions(+), 1 deletions(-)
 create mode 100644 include/linux/percpu-list.h
 create mode 100644 lib/percpu-list.c

diff --git a/include/linux/percpu-list.h b/include/linux/percpu-list.h
new file mode 100644
index 0000000..94be520
--- /dev/null
+++ b/include/linux/percpu-list.h
@@ -0,0 +1,117 @@
+/*
+ * Per-cpu list
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long
+ */
+#ifndef __LINUX_PERCPU_LIST_H
+#define __LINUX_PERCPU_LIST_H
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/percpu.h>
+
+/*
+ * include/linux/percpu-list.h
+ *
+ * A per-cpu list protected by a per-cpu spinlock.
+ *
+ * The list head percpu_list structure contains the spinlock, the other
+ * entries in the list contain the spinlock pointer.
+ */
+struct percpu_list {
+	struct list_head list;
+	union {
+		spinlock_t lock;	/* For list head */
+		spinlock_t *lockptr;	/* For other entries */
+	};
+};
+
+/*
+ * A simplified for_all_percpu_list_entries macro without the next and pchead
+ * parameters.
+ */
+#define for_all_percpu_list_entries_simple(pos, pclock, head, member)	\
+	for_all_percpu_list_entries(pos, next, pchead, pclock, head, member)
+
+#define PERCPU_LIST_HEAD_INIT(name)				\
+	{							\
+		.list.prev = &name.list,			\
+		.list.next = &name.list,			\
+		.list.lock = __SPIN_LOCK_UNLOCKED(name),	\
+	}
+
+#define PERCPU_LIST_ENTRY_INIT(name)		\
+	{					\
+		.list.prev = &name.list,	\
+		.list.next = &name.list,	\
+		.list.lockptr = NULL		\
+	}
+
+static inline void INIT_PERCPU_LIST_HEAD(struct percpu_list *pcpu_list)
+{
+	INIT_LIST_HEAD(&pcpu_list->list);
+	pcpu_list->lock = __SPIN_LOCK_UNLOCKED(&pcpu_list->lock);
+}
+
+static inline void INIT_PERCPU_LIST_ENTRY(struct percpu_list *pcpu_list)
+{
+	INIT_LIST_HEAD(&pcpu_list->list);
+	pcpu_list->lockptr = NULL;
+}
+
+#define PERCPU_LIST_HEAD(name)	struct percpu_list __percpu *name
+
+static inline void free_percpu_list_head(struct percpu_list **pclist_handle)
+{
+	free_percpu(*pclist_handle);
+	*pclist_handle = NULL;
+}
+
+static inline bool percpu_list_empty(struct percpu_list *pcpu_list)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		if (!list_empty(&per_cpu_ptr(pcpu_list, cpu)->list))
+			return false;
+	return true;
+}
+
+/**
+ * for_all_percpu_list_entries - iterate over all the per-cpu list with locking
+ * @pos:	the type * to use as a loop cursor for the current entry
+ * @next:	an internal type * variable pointing to the next entry
+ * @pchead:	an internal struct list * of percpu list head
+ * @pclock:	an internal variable for the current per-cpu spinlock
+ * @head:	the head of the per-cpu list
+ * @member:	the name of the per-cpu list within the struct
+ */
+#define for_all_percpu_list_entries(pos, next, pchead, pclock, head, member)\
+	{								\
+	int cpu;							\
+	for_each_possible_cpu(cpu) {					\
+		typeof(*pos) *next;					\
+		spinlock_t *pclock = per_cpu_ptr(&(head)->lock, cpu);	\
+		struct list_head *pchead = &per_cpu_ptr(head, cpu)->list;\
+		spin_lock(pclock);					\
+		list_for_each_entry_safe(pos, next, pchead, member.list)
+
+#define end_all_percpu_list_entries(pclock)	spin_unlock(pclock); } }
+
+extern int init_percpu_list_head(struct percpu_list **pclist_handle);
+extern void percpu_list_add(struct percpu_list *new, struct percpu_list *head);
+extern void percpu_list_del(struct percpu_list *entry);
+
+#endif /* __LINUX_PERCPU_LIST_H */
diff --git a/lib/Makefile b/lib/Makefile
index a7c26a4..71a25d4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -27,7 +27,7 @@ obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
	 gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \
	 bsearch.o find_bit.o llist.o memweight.o kfifo.o \
	 percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o \
-	 once.o
+	 once.o percpu-list.o
 obj-y += string_helpers.o
 obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o
 obj-y += hexdump.o
diff --git a/lib/percpu-list.c b/lib/percpu-list.c
new file mode 100644
index 0000000..e5c04bf
--- /dev/null
+++ b/lib/percpu-list.c
@@ -0,0 +1,80 @@
+/*
+ * Per-cpu list
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long
+ */
+#include <linux/percpu-list.h>
+
+/*
+ * Initialize the per-cpu list
+ */
+int init_percpu_list_head(struct percpu_list **pclist_handle)
+{
+	struct percpu_list *pclist = alloc_percpu(struct percpu_list);
+	int cpu;
+
+	if (!pclist)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu)
+		INIT_PERCPU_LIST_HEAD(per_cpu_ptr(pclist, cpu));
+
+	*pclist_handle = pclist;
+	return 0;
+}
+
+/*
+ * List selection is based on the CPU being used when the percpu_list_add()
+ * function is called. However, deletion may be done by a different CPU.
+ * So we still need to use a lock to protect the content of the list.
+ */
+void percpu_list_add(struct percpu_list *new, struct percpu_list *head)
+{
+	spinlock_t *lock;
+
+	/*
+	 * We need to disable preemption before accessing the per-cpu data
+	 * to make sure that the cpu won't be changed because of preemption.
+	 */
+	preempt_disable();
+	lock = this_cpu_ptr(&head->lock);
+	spin_lock(lock);
+	new->lockptr = lock;
+	list_add(&new->list, this_cpu_ptr(&head->list));
+	spin_unlock(lock);
+	preempt_enable();
+}
+
+/*
+ * Delete an entry from a percpu list
+ *
+ * We need to check the lock pointer again after taking the lock to guard
+ * against concurrent delete of the same entry. If the lock pointer changes
+ * or becomes NULL, we assume that the deletion was done elsewhere.
+ */
+void percpu_list_del(struct percpu_list *entry)
+{
+	spinlock_t *lock = READ_ONCE(entry->lockptr);
+
+	if (unlikely(!lock))
+		return;
+
+	spin_lock(lock);
+	if (likely(entry->lockptr && (lock == entry->lockptr))) {
+		list_del_init(&entry->list);
+		entry->lockptr = NULL;
+	}
+	spin_unlock(lock);
+}