From patchwork Mon Nov 28 15:36:25 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13057717
From: "Uladzislau Rezki (Sony)"
To: LKML, RCU, "Paul E. McKenney"
Cc: Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 1/4] rcu/kvfree: Switch to a generic linked list API
Date: Mon, 28 Nov 2022 16:36:25 +0100
Message-Id: <20221128153628.541361-1-urezki@gmail.com>

To make the code more readable and less confusing, switch to the
standard circular doubly-linked list API.
This simplifies the code, since the basic list operations are well
defined and documented. Please note that this patch does not introduce
any functional change; it is limited to refactoring.

Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 89 +++++++++++++++++++++++------------------------
 1 file changed, 43 insertions(+), 46 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 76973d716921..74d6889dcc50 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2900,13 +2900,13 @@ EXPORT_SYMBOL_GPL(call_rcu);

 /**
  * struct kvfree_rcu_bulk_data - single block to store kvfree_rcu() pointers
+ * @list: List node. All blocks are linked between each other
  * @nr_records: Number of active pointers in the array
- * @next: Next bulk object in the block chain
  * @records: Array of the kvfree_rcu() pointers
  */
 struct kvfree_rcu_bulk_data {
+        struct list_head list;
         unsigned long nr_records;
-        struct kvfree_rcu_bulk_data *next;
         void *records[];
 };

@@ -2922,21 +2922,21 @@ struct kvfree_rcu_bulk_data {
  * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
  * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
  * @head_free: List of kfree_rcu() objects waiting for a grace period
- * @bkvhead_free: Bulk-List of kvfree_rcu() objects waiting for a grace period
+ * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period
  * @krcp: Pointer to @kfree_rcu_cpu structure
  */
 struct kfree_rcu_cpu_work {
         struct rcu_work rcu_work;
         struct rcu_head *head_free;
-        struct kvfree_rcu_bulk_data *bkvhead_free[FREE_N_CHANNELS];
+        struct list_head bulk_head_free[FREE_N_CHANNELS];
         struct kfree_rcu_cpu *krcp;
 };

 /**
  * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
  * @head: List of kfree_rcu() objects not yet waiting for a grace period
- * @bkvhead: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period
+ * @bulk_head: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period
  * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
  * @lock: Synchronize access to this structure
  * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES
@@ -2960,7 +2960,7 @@ struct kfree_rcu_cpu_work {
  */
 struct kfree_rcu_cpu {
         struct rcu_head *head;
-        struct kvfree_rcu_bulk_data *bkvhead[FREE_N_CHANNELS];
+        struct list_head bulk_head[FREE_N_CHANNELS];
         struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
         raw_spinlock_t lock;
         struct delayed_work monitor_work;
@@ -3055,12 +3055,13 @@ drain_page_cache(struct kfree_rcu_cpu *krcp)

 /*
  * This function is invoked in workqueue context after a grace period.
- * It frees all the objects queued on ->bkvhead_free or ->head_free.
+ * It frees all the objects queued on ->bulk_head_free or ->head_free.
  */
 static void kfree_rcu_work(struct work_struct *work)
 {
         unsigned long flags;
-        struct kvfree_rcu_bulk_data *bkvhead[FREE_N_CHANNELS], *bnext;
+        struct kvfree_rcu_bulk_data *bnode, *n;
+        struct list_head bulk_head[FREE_N_CHANNELS];
         struct rcu_head *head, *next;
         struct kfree_rcu_cpu *krcp;
         struct kfree_rcu_cpu_work *krwp;
@@ -3072,10 +3073,8 @@ static void kfree_rcu_work(struct work_struct *work)
         raw_spin_lock_irqsave(&krcp->lock, flags);

         // Channels 1 and 2.
-        for (i = 0; i < FREE_N_CHANNELS; i++) {
-                bkvhead[i] = krwp->bkvhead_free[i];
-                krwp->bkvhead_free[i] = NULL;
-        }
+        for (i = 0; i < FREE_N_CHANNELS; i++)
+                list_replace_init(&krwp->bulk_head_free[i], &bulk_head[i]);

         // Channel 3.
         head = krwp->head_free;
@@ -3084,36 +3083,33 @@ static void kfree_rcu_work(struct work_struct *work)

         // Handle the first two channels.
         for (i = 0; i < FREE_N_CHANNELS; i++) {
-                for (; bkvhead[i]; bkvhead[i] = bnext) {
-                        bnext = bkvhead[i]->next;
-                        debug_rcu_bhead_unqueue(bkvhead[i]);
+                list_for_each_entry_safe(bnode, n, &bulk_head[i], list) {
+                        debug_rcu_bhead_unqueue(bnode);

                         rcu_lock_acquire(&rcu_callback_map);
                         if (i == 0) { // kmalloc() / kfree().
                                 trace_rcu_invoke_kfree_bulk_callback(
-                                        rcu_state.name, bkvhead[i]->nr_records,
-                                        bkvhead[i]->records);
+                                        rcu_state.name, bnode->nr_records,
+                                        bnode->records);

-                                kfree_bulk(bkvhead[i]->nr_records,
-                                        bkvhead[i]->records);
+                                kfree_bulk(bnode->nr_records, bnode->records);
                         } else { // vmalloc() / vfree().
-                                for (j = 0; j < bkvhead[i]->nr_records; j++) {
+                                for (j = 0; j < bnode->nr_records; j++) {
                                         trace_rcu_invoke_kvfree_callback(
-                                                rcu_state.name,
-                                                bkvhead[i]->records[j], 0);
+                                                rcu_state.name, bnode->records[j], 0);

-                                        vfree(bkvhead[i]->records[j]);
+                                        vfree(bnode->records[j]);
                                 }
                         }
                         rcu_lock_release(&rcu_callback_map);

                         raw_spin_lock_irqsave(&krcp->lock, flags);
-                        if (put_cached_bnode(krcp, bkvhead[i]))
-                                bkvhead[i] = NULL;
+                        if (put_cached_bnode(krcp, bnode))
+                                bnode = NULL;
                         raw_spin_unlock_irqrestore(&krcp->lock, flags);

-                        if (bkvhead[i])
-                                free_page((unsigned long) bkvhead[i]);
+                        if (bnode)
+                                free_page((unsigned long) bnode);

                         cond_resched_tasks_rcu_qs();
                 }
@@ -3149,7 +3145,7 @@ need_offload_krc(struct kfree_rcu_cpu *krcp)
         int i;

         for (i = 0; i < FREE_N_CHANNELS; i++)
-                if (krcp->bkvhead[i])
+                if (!list_empty(&krcp->bulk_head[i]))
                         return true;

         return !!krcp->head;
@@ -3186,21 +3182,20 @@ static void kfree_rcu_monitor(struct work_struct *work)
         for (i = 0; i < KFREE_N_BATCHES; i++) {
                 struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);

-                // Try to detach bkvhead or head and attach it over any
+                // Try to detach bulk_head or head and attach it over any
                 // available corresponding free channel. It can be that
                 // a previous RCU batch is in progress, it means that
                 // immediately to queue another one is not possible so
                 // in that case the monitor work is rearmed.
-                if ((krcp->bkvhead[0] && !krwp->bkvhead_free[0]) ||
-                        (krcp->bkvhead[1] && !krwp->bkvhead_free[1]) ||
+                if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
+                        (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
                                 (krcp->head && !krwp->head_free)) {
+
                         // Channel 1 corresponds to the SLAB-pointer bulk path.
                         // Channel 2 corresponds to vmalloc-pointer bulk path.
                         for (j = 0; j < FREE_N_CHANNELS; j++) {
-                                if (!krwp->bkvhead_free[j]) {
-                                        krwp->bkvhead_free[j] = krcp->bkvhead[j];
-                                        krcp->bkvhead[j] = NULL;
-                                }
+                                if (list_empty(&krwp->bulk_head_free[j]))
+                                        list_replace_init(&krcp->bulk_head[j], &krwp->bulk_head_free[j]);
                         }

                         // Channel 3 corresponds to both SLAB and vmalloc
@@ -3312,10 +3307,11 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
                 return false;

         idx = !!is_vmalloc_addr(ptr);
+        bnode = list_first_entry_or_null(&(*krcp)->bulk_head[idx],
+                struct kvfree_rcu_bulk_data, list);

         /* Check if a new block is required. */
-        if (!(*krcp)->bkvhead[idx] ||
-                        (*krcp)->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
+        if (!bnode || bnode->nr_records == KVFREE_BULK_MAX_ENTR) {
                 bnode = get_cached_bnode(*krcp);
                 if (!bnode && can_alloc) {
                         krc_this_cpu_unlock(*krcp, *flags);
@@ -3339,18 +3335,13 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
                 if (!bnode)
                         return false;

-                /* Initialize the new block. */
+                // Initialize the new block and attach it.
                 bnode->nr_records = 0;
-                bnode->next = (*krcp)->bkvhead[idx];
-
-                /* Attach it to the head. */
-                (*krcp)->bkvhead[idx] = bnode;
+                list_add(&bnode->list, &(*krcp)->bulk_head[idx]);
         }

         /* Finally insert. */
-        (*krcp)->bkvhead[idx]->records
-                [(*krcp)->bkvhead[idx]->nr_records++] = ptr;
-
+        bnode->records[bnode->nr_records++] = ptr;
         return true;
 }

@@ -4790,7 +4781,7 @@ struct workqueue_struct *rcu_gp_wq;
 static void __init kfree_rcu_batch_init(void)
 {
         int cpu;
-        int i;
+        int i, j;

         /* Clamp it to [0:100] seconds interval. */
         if (rcu_delay_page_cache_fill_msec < 0 ||
@@ -4810,8 +4801,14 @@ static void __init kfree_rcu_batch_init(void)
                 for (i = 0; i < KFREE_N_BATCHES; i++) {
                         INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
                         krcp->krw_arr[i].krcp = krcp;
+
+                        for (j = 0; j < FREE_N_CHANNELS; j++)
+                                INIT_LIST_HEAD(&krcp->krw_arr[i].bulk_head_free[j]);
                 }

+                for (i = 0; i < FREE_N_CHANNELS; i++)
+                        INIT_LIST_HEAD(&krcp->bulk_head[i]);
+
                 INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
                 INIT_DELAYED_WORK(&krcp->page_cache_work, fill_page_cache_func);
                 krcp->initialized = true;
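For readers less familiar with the kernel's <linux/list.h> API that this
patch switches to, here is a minimal, self-contained user-space sketch of
the two operations the patch leans on: detaching a whole channel in O(1)
with list_replace_init() and walking it destructively with a safe
iterator. The helpers and struct bulk_data below are simplified stand-ins
written for illustration; they are not the kernel's implementation.

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *new, struct list_head *head)
{
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

/* Move everything queued on @old onto @new; leave @old empty again. */
static void list_replace_init(struct list_head *old, struct list_head *new)
{
        new->next = old->next;
        new->next->prev = new;
        new->prev = old->prev;
        new->prev->next = new;
        INIT_LIST_HEAD(old);
}

struct bulk_data {                /* illustrative analog of kvfree_rcu_bulk_data */
        struct list_head list;
        int id;
};

int main(void)
{
        struct list_head pending, detached;
        struct bulk_data a = { .id = 1 }, b = { .id = 2 };

        INIT_LIST_HEAD(&pending);
        list_add(&a.list, &pending);
        list_add(&b.list, &pending);

        /* Like kfree_rcu_monitor(): detach the whole channel at once. */
        list_replace_init(&pending, &detached);

        /* Like kfree_rcu_work(): walk safely, consuming entries as we go. */
        for (struct list_head *p = detached.next, *n = p->next;
             p != &detached; p = n, n = p->next) {
                struct bulk_data *bnode = container_of(p, struct bulk_data, list);
                printf("freeing block %d\n", bnode->id);
        }
        return 0;
}

The key property is that detaching a channel is four pointer updates
regardless of how many blocks are queued, which is what lets the patch
drop the open-coded head/next juggling.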
From patchwork Mon Nov 28 15:36:26 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13057718
From: "Uladzislau Rezki (Sony)"
To: LKML, RCU, "Paul E. McKenney"
Cc: Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 2/4] rcu/kvfree: Move bulk/list reclaim to separate functions
Date: Mon, 28 Nov 2022 16:36:26 +0100
Message-Id: <20221128153628.541361-2-urezki@gmail.com>
In-Reply-To: <20221128153628.541361-1-urezki@gmail.com>
References: <20221128153628.541361-1-urezki@gmail.com>

There are two different paths by which memory is reclaimed. Currently
both are open-coded, which makes the code a bit messy and harder to
read. Introduce two separate functions, kvfree_rcu_list() and
kvfree_rcu_bulk(), to cover the two independent cases. Please note that
this patch does not introduce any functional change; it is limited to
refactoring.

Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 114 ++++++++++++++++++++++++++--------------------
 1 file changed, 65 insertions(+), 49 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 74d6889dcc50..3b5f6036d884 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3053,6 +3053,65 @@ drain_page_cache(struct kfree_rcu_cpu *krcp)
         return freed;
 }

+static void
+kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp,
+        struct kvfree_rcu_bulk_data *bnode, int idx)
+{
+        unsigned long flags;
+        int i;
+
+        debug_rcu_bhead_unqueue(bnode);
+
+        rcu_lock_acquire(&rcu_callback_map);
+        if (idx == 0) { // kmalloc() / kfree().
+                trace_rcu_invoke_kfree_bulk_callback(
+                        rcu_state.name, bnode->nr_records,
+                        bnode->records);
+
+                kfree_bulk(bnode->nr_records, bnode->records);
+        } else { // vmalloc() / vfree().
+                for (i = 0; i < bnode->nr_records; i++) {
+                        trace_rcu_invoke_kvfree_callback(
+                                rcu_state.name, bnode->records[i], 0);
+
+                        vfree(bnode->records[i]);
+                }
+        }
+        rcu_lock_release(&rcu_callback_map);
+
+        raw_spin_lock_irqsave(&krcp->lock, flags);
+        if (put_cached_bnode(krcp, bnode))
+                bnode = NULL;
+        raw_spin_unlock_irqrestore(&krcp->lock, flags);
+
+        if (bnode)
+                free_page((unsigned long) bnode);
+
+        cond_resched_tasks_rcu_qs();
+}
+
+static void
+kvfree_rcu_list(struct rcu_head *head)
+{
+        struct rcu_head *next;
+
+        for (; head; head = next) {
+                unsigned long offset = (unsigned long)head->func;
+                void *ptr = (void *)head - offset;
+
+                next = head->next;
+                debug_rcu_head_unqueue((struct rcu_head *)ptr);
+                rcu_lock_acquire(&rcu_callback_map);
+                trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
+
+                if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
+                        kvfree(ptr);
+
+                rcu_lock_release(&rcu_callback_map);
+                cond_resched_tasks_rcu_qs();
+        }
+}
+
 /*
  * This function is invoked in workqueue context after a grace period.
  * It frees all the objects queued on ->bulk_head_free or ->head_free.
  */
@@ -3062,10 +3121,10 @@ static void kfree_rcu_work(struct work_struct *work)
         unsigned long flags;
         struct kvfree_rcu_bulk_data *bnode, *n;
         struct list_head bulk_head[FREE_N_CHANNELS];
-        struct rcu_head *head, *next;
+        struct rcu_head *head;
         struct kfree_rcu_cpu *krcp;
         struct kfree_rcu_cpu_work *krwp;
-        int i, j;
+        int i;

         krwp = container_of(to_rcu_work(work),
                 struct kfree_rcu_cpu_work, rcu_work);
@@ -3082,38 +3141,9 @@ static void kfree_rcu_work(struct work_struct *work)
         raw_spin_unlock_irqrestore(&krcp->lock, flags);

         // Handle the first two channels.
-        for (i = 0; i < FREE_N_CHANNELS; i++) {
-                list_for_each_entry_safe(bnode, n, &bulk_head[i], list) {
-                        debug_rcu_bhead_unqueue(bnode);
-
-                        rcu_lock_acquire(&rcu_callback_map);
-                        if (i == 0) { // kmalloc() / kfree().
-                                trace_rcu_invoke_kfree_bulk_callback(
-                                        rcu_state.name, bnode->nr_records,
-                                        bnode->records);
-
-                                kfree_bulk(bnode->nr_records, bnode->records);
-                        } else { // vmalloc() / vfree().
-                                for (j = 0; j < bnode->nr_records; j++) {
-                                        trace_rcu_invoke_kvfree_callback(
-                                                rcu_state.name, bnode->records[j], 0);
-
-                                        vfree(bnode->records[j]);
-                                }
-                        }
-                        rcu_lock_release(&rcu_callback_map);
-
-                        raw_spin_lock_irqsave(&krcp->lock, flags);
-                        if (put_cached_bnode(krcp, bnode))
-                                bnode = NULL;
-                        raw_spin_unlock_irqrestore(&krcp->lock, flags);
-
-                        if (bnode)
-                                free_page((unsigned long) bnode);
-
-                        cond_resched_tasks_rcu_qs();
-                }
-        }
+        for (i = 0; i < FREE_N_CHANNELS; i++)
+                list_for_each_entry_safe(bnode, n, &bulk_head[i], list)
+                        kvfree_rcu_bulk(krcp, bnode, i);

         /*
          * This is used when the "bulk" path can not be used for the
@@ -3122,21 +3152,7 @@ static void kfree_rcu_work(struct work_struct *work)
          * queued on a linked list through their rcu_head structures.
          * This list is named "Channel 3".
          */
-        for (; head; head = next) {
-                unsigned long offset = (unsigned long)head->func;
-                void *ptr = (void *)head - offset;
-
-                next = head->next;
-                debug_rcu_head_unqueue((struct rcu_head *)ptr);
-                rcu_lock_acquire(&rcu_callback_map);
-                trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
-
-                if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
-                        kvfree(ptr);
-
-                rcu_lock_release(&rcu_callback_map);
-                cond_resched_tasks_rcu_qs();
-        }
+        kvfree_rcu_list(head);
 }

 static bool
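The kvfree_rcu_list() helper factored out above relies on a convention
that is easy to miss: for objects queued through their embedded
rcu_head, the func slot stores not a callback but the byte offset of the
rcu_head inside the enclosing object, so the freeing side can recover
the original pointer with plain pointer arithmetic. The user-space
sketch below illustrates that convention; struct my_obj, is_offset(),
its 4096 threshold, and reclaim_list() are simplified assumptions for
illustration, not the kernel's exact definitions.

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct rcu_head {
        struct rcu_head *next;
        void (*func)(struct rcu_head *);
};

struct my_obj {
        int payload;
        struct rcu_head rh;        /* embedded, as kvfree_rcu() users do */
};

/* Small values cannot be real function addresses, so treat them as offsets. */
static int is_offset(unsigned long v) { return v < 4096; }

static void reclaim_list(struct rcu_head *head)
{
        struct rcu_head *next;

        for (; head; head = next) {
                unsigned long offset = (unsigned long)head->func;
                void *ptr = (void *)((char *)head - offset);

                next = head->next;
                if (is_offset(offset)) {
                        printf("freeing object at %p (offset %lu)\n", ptr, offset);
                        free(ptr);
                }
        }
}

int main(void)
{
        struct my_obj *o = malloc(sizeof(*o));

        /* What queueing conceptually does: encode the offset, not a callback. */
        o->rh.func = (void (*)(struct rcu_head *))offsetof(struct my_obj, rh);
        o->rh.next = NULL;

        reclaim_list(&o->rh);        /* in the kernel: after a grace period */
        return 0;
}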
From patchwork Mon Nov 28 15:36:27 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13057719
From: "Uladzislau Rezki (Sony)"
To: LKML, RCU, "Paul E. McKenney"
Cc: Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 3/4] rcu/kvfree: Move need_offload_krc() out of krcp->lock
Date: Mon, 28 Nov 2022 16:36:27 +0100
Message-Id: <20221128153628.541361-3-urezki@gmail.com>
In-Reply-To: <20221128153628.541361-1-urezki@gmail.com>
References: <20221128153628.541361-1-urezki@gmail.com>

Currently the need_offload_krc() function requires krcp->lock to be
held, because krcp->head cannot be checked concurrently. Fix this by
updating krcp->head with the WRITE_ONCE() macro, so the check becomes
lock-free and readers can safely observe valid data without any
locking.

Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3b5f6036d884..f68ddbef2a33 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3218,7 +3218,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
                 // objects queued on the linked list.
                 if (!krwp->head_free) {
                         krwp->head_free = krcp->head;
-                        krcp->head = NULL;
+                        WRITE_ONCE(krcp->head, NULL);
                 }

                 WRITE_ONCE(krcp->count, 0);
@@ -3232,6 +3232,8 @@ static void kfree_rcu_monitor(struct work_struct *work)
                 }
         }

+        raw_spin_unlock_irqrestore(&krcp->lock, flags);
+
         // If there is nothing to detach, it means that our job is
         // successfully done here. In case of having at least one
         // of the channels that is still busy we should rearm the
         // work to repeat an attempt. Because previous batches are
         // still in progress.
         if (need_offload_krc(krcp))
                 schedule_delayed_monitor_work(krcp);
-
-        raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }

 static enum hrtimer_restart
@@ -3415,7 +3415,7 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
                 head->func = func;
                 head->next = krcp->head;
-                krcp->head = head;
+                WRITE_ONCE(krcp->head, head);
                 success = true;
         }

@@ -3492,15 +3492,12 @@ static struct shrinker kfree_rcu_shrinker = {
 void __init kfree_rcu_scheduler_running(void)
 {
         int cpu;
-        unsigned long flags;

         for_each_possible_cpu(cpu) {
                 struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

-                raw_spin_lock_irqsave(&krcp->lock, flags);
                 if (need_offload_krc(krcp))
                         schedule_delayed_monitor_work(krcp);
-                raw_spin_unlock_irqrestore(&krcp->lock, flags);
         }
 }
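The locking change above follows a common kernel pattern: writers remain
serialized by the lock, but the shared pointer is published with
WRITE_ONCE() so that a reader may test it locklessly; a racy read is
tolerable here because a missed update only delays the monitor work by
one rearm. A rough user-space analog, using C11 relaxed atomics in place
of WRITE_ONCE()/READ_ONCE() (an approximation, not an exact equivalent;
push() and need_offload() are illustrative names), might look like this:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct node { struct node *next; };

static _Atomic(struct node *) head;        /* analog of krcp->head */

/* Writer side: the lock that serializes writers is elided here,
 * but every store to the shared pointer is marked. */
static void push(struct node *n)
{
        n->next = atomic_load_explicit(&head, memory_order_relaxed);
        atomic_store_explicit(&head, n, memory_order_relaxed);  /* WRITE_ONCE() */
}

/* Reader side: no lock taken; only asks "is any work queued?" */
static bool need_offload(void)
{
        return atomic_load_explicit(&head, memory_order_relaxed) != NULL;  /* READ_ONCE() */
}

int main(void)
{
        struct node n;

        printf("before: %d\n", need_offload());
        push(&n);
        printf("after: %d\n", need_offload());
        return 0;
}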
From patchwork Mon Nov 28 15:36:28 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13057720
From: "Uladzislau Rezki (Sony)"
To: LKML, RCU, "Paul E. McKenney"
Cc: Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 4/4] rcu/kvfree: Use a polled API to speed up the reclaim process
Date: Mon, 28 Nov 2022 16:36:28 +0100
Message-Id: <20221128153628.541361-4-urezki@gmail.com>
In-Reply-To: <20221128153628.541361-1-urezki@gmail.com>
References: <20221128153628.541361-1-urezki@gmail.com>

Currently, all objects placed into a batch require a full grace period
after which the objects in the batch become eligible to be freed. The
problem is that many pointers may already have passed several GP
sequences, so they do not need any extra delay and can be reclaimed
right away without waiting.

In order to reduce the memory footprint, this patch introduces a
per-page grace-period-tracking mechanism. It allows us to distinguish
pointers for which a grace period has already passed from those for
which it has not. The reclaim worker, in its turn, frees memory in
reverse order starting from the tail, because a GP has most likely
already passed for the objects in those pages. If a page in the list
hits the condition where its GP is not yet ready, we bail out and
request one more grace period in order to complete the drain process
for the remaining pages.

Test example:

kvm.sh --memory 10G --torture rcuscale --allcpus --duration 1 \
        --kconfig CONFIG_NR_CPUS=64 \
        --kconfig CONFIG_RCU_NOCB_CPU=y \
        --kconfig CONFIG_RCU_NOCB_CPU_DEFAULT_ALL=y \
        --kconfig CONFIG_RCU_LAZY=n \
        --bootargs "rcuscale.kfree_rcu_test=1 rcuscale.kfree_nthreads=16 \
        rcuscale.holdoff=20 rcuscale.kfree_loops=10000 \
        torture.disable_onoff_at_boot" --trust-make

Total time taken by all kfree'ers: 8535693700 ns, loops: 10000, batches: 1188, memory footprint: 2248MB
Total time taken by all kfree'ers: 8466933582 ns, loops: 10000, batches: 1157, memory footprint: 2820MB
Total time taken by all kfree'ers: 5375602446 ns, loops: 10000, batches: 1130, memory footprint: 6502MB
Total time taken by all kfree'ers: 7523283832 ns, loops: 10000, batches: 1006, memory footprint: 3343MB
Total time taken by all kfree'ers: 6459171956 ns, loops: 10000, batches: 1150, memory footprint: 6549MB

Total time taken by all kfree'ers: 8560060176 ns, loops: 10000, batches: 1787, memory footprint: 61MB
Total time taken by all kfree'ers: 8573885501 ns, loops: 10000, batches: 1777, memory footprint: 93MB
Total time taken by all kfree'ers: 8320000202 ns, loops: 10000, batches: 1727, memory footprint: 66MB
Total time taken by all kfree'ers: 8552718794 ns, loops: 10000, batches: 1790, memory footprint: 75MB
Total time taken by all kfree'ers: 8601368792 ns, loops: 10000, batches: 1724, memory footprint: 62MB

Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 47 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 8 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f68ddbef2a33..b41241994672 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2901,11 +2901,13 @@ EXPORT_SYMBOL_GPL(call_rcu);
 /**
  * struct kvfree_rcu_bulk_data - single block to store kvfree_rcu() pointers
  * @list: List node. All blocks are linked between each other
+ * @gp_snap: Snapshot of RCU state for objects placed to this bulk
  * @nr_records: Number of active pointers in the array
  * @records: Array of the kvfree_rcu() pointers
  */
 struct kvfree_rcu_bulk_data {
         struct list_head list;
+        unsigned long gp_snap;
         unsigned long nr_records;
         void *records[];
 };

@@ -2922,13 +2924,15 @@ struct kvfree_rcu_bulk_data {
  * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
  * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
  * @head_free: List of kfree_rcu() objects waiting for a grace period
+ * @head_free_gp_snap: Snapshot of RCU state for objects placed to "@head_free"
  * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period
  * @krcp: Pointer to @kfree_rcu_cpu structure
  */
 struct kfree_rcu_cpu_work {
-        struct rcu_work rcu_work;
+        struct work_struct rcu_work;
         struct rcu_head *head_free;
+        unsigned long head_free_gp_snap;
         struct list_head bulk_head_free[FREE_N_CHANNELS];
         struct kfree_rcu_cpu *krcp;
 };

@@ -3124,10 +3128,11 @@ static void kfree_rcu_work(struct work_struct *work)
         struct rcu_head *head;
         struct kfree_rcu_cpu *krcp;
         struct kfree_rcu_cpu_work *krwp;
+        unsigned long head_free_gp_snap;
         int i;

-        krwp = container_of(to_rcu_work(work),
-                struct kfree_rcu_cpu_work, rcu_work);
+        krwp = container_of(work,
+                struct kfree_rcu_cpu_work, rcu_work);
         krcp = krwp->krcp;

         raw_spin_lock_irqsave(&krcp->lock, flags);
@@ -3138,12 +3143,29 @@ static void kfree_rcu_work(struct work_struct *work)
         // Channel 3.
         head = krwp->head_free;
         krwp->head_free = NULL;
+        head_free_gp_snap = krwp->head_free_gp_snap;
         raw_spin_unlock_irqrestore(&krcp->lock, flags);

         // Handle the first two channels.
-        for (i = 0; i < FREE_N_CHANNELS; i++)
+        for (i = 0; i < FREE_N_CHANNELS; i++) {
+                // Start from the tail page, so a GP is likely passed for it.
+                list_for_each_entry_safe_reverse(bnode, n, &bulk_head[i], list) {
+                        // Not yet ready? Bail out since we need one more GP.
+                        if (!poll_state_synchronize_rcu(bnode->gp_snap))
+                                break;
+
+                        list_del_init(&bnode->list);
+                        kvfree_rcu_bulk(krcp, bnode, i);
+                }
+
+                // Please note a request for one more extra GP can
+                // occur only once for all objects in this batch.
+                if (!list_empty(&bulk_head[i]))
+                        synchronize_rcu();
+
                 list_for_each_entry_safe(bnode, n, &bulk_head[i], list)
                         kvfree_rcu_bulk(krcp, bnode, i);
+        }

         /*
          * This is used when the "bulk" path can not be used for the
@@ -3152,7 +3174,10 @@ static void kfree_rcu_work(struct work_struct *work)
          * queued on a linked list through their rcu_head structures.
          * This list is named "Channel 3".
          */
-        kvfree_rcu_list(head);
+        if (head) {
+                cond_synchronize_rcu(head_free_gp_snap);
+                kvfree_rcu_list(head);
+        }
 }

 static bool
@@ -3219,6 +3244,11 @@ static void kfree_rcu_monitor(struct work_struct *work)
                 if (!krwp->head_free) {
                         krwp->head_free = krcp->head;
                         WRITE_ONCE(krcp->head, NULL);
+
+                        // Take a snapshot for this krwp. Please note no more
+                        // any objects can be added to attached head_free channel
+                        // therefore fixate a GP for it here.
+                        krwp->head_free_gp_snap = get_state_synchronize_rcu();
                 }

                 WRITE_ONCE(krcp->count, 0);
@@ -3228,7 +3258,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
                 // be that the work is in the pending state when
                 // channels have been detached following by each
                 // other.
-                queue_rcu_work(system_wq, &krwp->rcu_work);
+                queue_work(system_wq, &krwp->rcu_work);
         }
 }

@@ -3356,8 +3386,9 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
                 list_add(&bnode->list, &(*krcp)->bulk_head[idx]);
         }

-        /* Finally insert. */
+        // Finally insert and update the GP for this page.
         bnode->records[bnode->nr_records++] = ptr;
+        bnode->gp_snap = get_state_synchronize_rcu();
         return true;
 }

@@ -4812,7 +4843,7 @@ static void __init kfree_rcu_batch_init(void)
                 struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

                 for (i = 0; i < KFREE_N_BATCHES; i++) {
-                        INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
+                        INIT_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
                         krcp->krw_arr[i].krcp = krcp;

                         for (j = 0; j < FREE_N_CHANNELS; j++)
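Conceptually, the polled grace-period API used above is a cookie scheme:
get_state_synchronize_rcu() snapshots the grace-period sequence,
poll_state_synchronize_rcu() asks whether that snapshot has since been
passed, and cond_synchronize_rcu() blocks only when it has not. The
user-space sketch below models the idea with a bare sequence counter;
the kernel's real cookie encoding and wakeup machinery are far more
involved, so the names (get_state, poll_state, cond_synchronize) and the
counter are purely illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>

static unsigned long gp_seq;        /* bumped when a "grace period" completes */

/* Snapshot the state a future grace period must reach. */
static unsigned long get_state(void)         { return gp_seq + 1; }
/* Has a grace period elapsed since the snapshot was taken? */
static bool poll_state(unsigned long cookie) { return gp_seq >= cookie; }
/* Stand-in for a blocking wait that completes one grace period. */
static void synchronize(void)                { gp_seq++; }

static void cond_synchronize(unsigned long cookie)
{
        if (!poll_state(cookie))
                synchronize();        /* only block when the snapshot is not yet done */
}

int main(void)
{
        /* Snapshot taken when an object is queued... */
        unsigned long snap = get_state();

        printf("ready yet? %d\n", poll_state(snap));        /* 0: needs one more GP */

        synchronize();        /* one grace period elapses */

        /* ...so the reclaim side can free immediately, without waiting. */
        printf("ready now? %d\n", poll_state(snap));        /* 1 */
        cond_synchronize(snap);        /* no-op: GP already passed */
        return 0;
}

This is also why the reclaim side walks pages from the tail: the oldest
pages are the most likely to have their snapshot already satisfied, and
at most one extra synchronize_rcu() is needed for whatever remains.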