From patchwork Tue Jan 30 09:11:48 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537038
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, Tejun Heo
Subject: [PATCH 1/8] workqueue: Update lock debugging code
Date: Mon, 29 Jan 2024 23:11:48 -1000
Message-ID: <20240130091300.2968534-2-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

These changes are in preparation for BH workqueues, which will execute work
items from BH context.

- Update the lock and RCU depth checks in process_one_work() so that they
  remember the starting depths, check against them, and print out the depth
  changes.

- Factor out the lockdep annotations in the flush paths into
  touch_{wq|work}_lockdep_map(). The work->lockdep_map touching is moved from
  __flush_work() to its callee - start_flush_work(). This brings it closer to
  the wq counterpart and will allow testing the associated wq's flags, which
  will be needed to support BH workqueues.

This is not expected to cause any functional changes.
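The shape of the new check is easy to see in isolation. Below is a minimal
userspace C sketch (not kernel code; fake_lock_depth, run_work() and
leaky_work_fn() are invented for illustration) of the same idea: record the
nesting depth before invoking a callback and report a leak if the callback
returns at a different depth. The kernel version compares lockdep_depth()
and rcu_preempt_depth() instead of a toy counter.

#include <stdio.h>

/* Stand-in for lockdep_depth()/rcu_preempt_depth(); purely illustrative. */
static int fake_lock_depth;

static void leaky_work_fn(void)
{
	fake_lock_depth++;	/* "acquires" a lock and forgets to release it */
}

static void run_work(const char *name, void (*fn)(void))
{
	int start_depth = fake_lock_depth;

	fn();

	/* Same shape as the updated process_one_work() check: compare deltas. */
	if (fake_lock_depth != start_depth)
		printf("BUG: %s leaked a lock: depth %d->%d\n",
		       name, start_depth, fake_lock_depth);
}

int main(void)
{
	run_work("leaky_work_fn", leaky_work_fn);
	return 0;
}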
Signed-off-by: Tejun Heo --- kernel/workqueue.c | 51 ++++++++++++++++++++++++++++++---------------- 1 file changed, 34 insertions(+), 17 deletions(-) diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 9221a4c57ae1..3f2081bd05a4 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -2954,6 +2954,7 @@ __acquires(&pool->lock) struct pool_workqueue *pwq = get_work_pwq(work); struct worker_pool *pool = worker->pool; unsigned long work_data; + int lockdep_start_depth, rcu_start_depth; #ifdef CONFIG_LOCKDEP /* * It is permissible to free the struct work_struct from @@ -3016,6 +3017,8 @@ __acquires(&pool->lock) pwq->stats[PWQ_STAT_STARTED]++; raw_spin_unlock_irq(&pool->lock); + rcu_start_depth = rcu_preempt_depth(); + lockdep_start_depth = lockdep_depth(current); lock_map_acquire(&pwq->wq->lockdep_map); lock_map_acquire(&lockdep_map); /* @@ -3051,12 +3054,15 @@ __acquires(&pool->lock) lock_map_release(&lockdep_map); lock_map_release(&pwq->wq->lockdep_map); - if (unlikely(in_atomic() || lockdep_depth(current) > 0 || - rcu_preempt_depth() > 0)) { - pr_err("BUG: workqueue leaked lock or atomic: %s/0x%08x/%d/%d\n" - " last function: %ps\n", - current->comm, preempt_count(), rcu_preempt_depth(), - task_pid_nr(current), worker->current_func); + if (unlikely((worker->task && in_atomic()) || + lockdep_depth(current) != lockdep_start_depth || + rcu_preempt_depth() != rcu_start_depth)) { + pr_err("BUG: workqueue leaked atomic, lock or RCU: %s[%d]\n" + " preempt=0x%08x lock=%d->%d RCU=%d->%d workfn=%ps\n", + current->comm, task_pid_nr(current), preempt_count(), + lockdep_start_depth, lockdep_depth(current), + rcu_start_depth, rcu_preempt_depth(), + worker->current_func); debug_show_held_locks(current); dump_stack(); } @@ -3538,6 +3544,19 @@ static bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq, return wait; } +static void touch_wq_lockdep_map(struct workqueue_struct *wq) +{ + lock_map_acquire(&wq->lockdep_map); + lock_map_release(&wq->lockdep_map); +} + +static void touch_work_lockdep_map(struct work_struct *work, + struct workqueue_struct *wq) +{ + lock_map_acquire(&work->lockdep_map); + lock_map_release(&work->lockdep_map); +} + /** * __flush_workqueue - ensure that any scheduled work has run to completion. * @wq: workqueue to flush @@ -3557,8 +3576,7 @@ void __flush_workqueue(struct workqueue_struct *wq) if (WARN_ON(!wq_online)) return; - lock_map_acquire(&wq->lockdep_map); - lock_map_release(&wq->lockdep_map); + touch_wq_lockdep_map(wq); mutex_lock(&wq->mutex); @@ -3757,6 +3775,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr, struct worker *worker = NULL; struct worker_pool *pool; struct pool_workqueue *pwq; + struct workqueue_struct *wq; might_sleep(); @@ -3780,11 +3799,14 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr, pwq = worker->current_pwq; } - check_flush_dependency(pwq->wq, work); + wq = pwq->wq; + check_flush_dependency(wq, work); insert_wq_barrier(pwq, barr, work, worker); raw_spin_unlock_irq(&pool->lock); + touch_work_lockdep_map(work, wq); + /* * Force a lock recursion deadlock when using flush_work() inside a * single-threaded or rescuer equipped workqueue. @@ -3794,11 +3816,9 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr, * workqueues the deadlock happens when the rescuer stalls, blocking * forward progress. 
 	 */
-	if (!from_cancel &&
-	    (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)) {
-		lock_map_acquire(&pwq->wq->lockdep_map);
-		lock_map_release(&pwq->wq->lockdep_map);
-	}
+	if (!from_cancel && (wq->saved_max_active == 1 || wq->rescuer))
+		touch_wq_lockdep_map(wq);
+
 	rcu_read_unlock();
 	return true;
 already_gone:
@@ -3817,9 +3837,6 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
 	if (WARN_ON(!work->func))
 		return false;
 
-	lock_map_acquire(&work->lockdep_map);
-	lock_map_release(&work->lockdep_map);
-
 	if (start_flush_work(work, &barr, from_cancel)) {
 		wait_for_completion(&barr.done);
 		destroy_work_on_stack(&barr.work);
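The lock_map_acquire()/lock_map_release() pairs that this patch wraps in
touch_{wq|work}_lockdep_map() exist purely to teach lockdep about flush
ordering. The fragment below is an illustrative sketch, not part of the
patch; my_wq, my_lock, my_work and my_work_fn are invented names. Because
both the flush path and process_one_work() "touch" wq->lockdep_map, lockdep
sees the my_lock -> my_wq and my_wq -> my_lock edges and reports the cycle
even if the two paths never actually collide at runtime.

static DEFINE_MUTEX(my_lock);
static struct workqueue_struct *my_wq;
static struct work_struct my_work;

static void my_work_fn(struct work_struct *work)
{
	mutex_lock(&my_lock);		/* records my_wq -> my_lock */
	/* ... */
	mutex_unlock(&my_lock);
}

static void buggy_flush_path(void)
{
	INIT_WORK(&my_work, my_work_fn);
	queue_work(my_wq, &my_work);

	mutex_lock(&my_lock);
	flush_workqueue(my_wq);		/* records my_lock -> my_wq: lockdep splat */
	mutex_unlock(&my_lock);
}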
From patchwork Tue Jan 30 09:11:49 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537039
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, Tejun Heo
Subject: [PATCH 2/8] workqueue: Factor out init_cpu_worker_pool()
Date: Mon, 29 Jan 2024 23:11:49 -1000
Message-ID: <20240130091300.2968534-3-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

Factor out init_cpu_worker_pool() from workqueue_init_early(). This is pure
reorganization in preparation for BH workqueue support.
Signed-off-by: Tejun Heo
---
 kernel/workqueue.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 3f2081bd05a4..f93554e479c4 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -7135,6 +7135,22 @@ static void __init restrict_unbound_cpumask(const char *name, const struct cpuma
 	cpumask_and(wq_unbound_cpumask, wq_unbound_cpumask, mask);
 }
 
+static void __init init_cpu_worker_pool(struct worker_pool *pool, int cpu, int nice)
+{
+	BUG_ON(init_worker_pool(pool));
+	pool->cpu = cpu;
+	cpumask_copy(pool->attrs->cpumask, cpumask_of(cpu));
+	cpumask_copy(pool->attrs->__pod_cpumask, cpumask_of(cpu));
+	pool->attrs->nice = nice;
+	pool->attrs->affn_strict = true;
+	pool->node = cpu_to_node(cpu);
+
+	/* alloc pool ID */
+	mutex_lock(&wq_pool_mutex);
+	BUG_ON(worker_pool_assign_id(pool));
+	mutex_unlock(&wq_pool_mutex);
+}
+
 /**
  * workqueue_init_early - early init for workqueue subsystem
  *
@@ -7195,20 +7211,8 @@ void __init workqueue_init_early(void)
 		struct worker_pool *pool;
 
 		i = 0;
-		for_each_cpu_worker_pool(pool, cpu) {
-			BUG_ON(init_worker_pool(pool));
-			pool->cpu = cpu;
-			cpumask_copy(pool->attrs->cpumask, cpumask_of(cpu));
-			cpumask_copy(pool->attrs->__pod_cpumask, cpumask_of(cpu));
-			pool->attrs->nice = std_nice[i++];
-			pool->attrs->affn_strict = true;
-			pool->node = cpu_to_node(cpu);
-
-			/* alloc pool ID */
-			mutex_lock(&wq_pool_mutex);
-			BUG_ON(worker_pool_assign_id(pool));
-			mutex_unlock(&wq_pool_mutex);
-		}
+		for_each_cpu_worker_pool(pool, cpu)
+			init_cpu_worker_pool(pool, cpu, std_nice[i++]);
 	}
 
 	/* create default unbound and ordered wq attrs */
From patchwork Tue Jan 30 09:11:50 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537040
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, Tejun Heo
Subject: [PATCH 3/8] workqueue: Implement BH workqueues to eventually replace tasklets
Date: Mon, 29 Jan 2024 23:11:50 -1000
Message-ID: <20240130091300.2968534-4-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws, such as
the execution code accessing the tasklet item after the execution is complete
(which can lead to subtle use-after-free in certain usage scenarios) and
less-developed flush and cancel mechanisms.

This patch implements BH workqueues, which share the same semantics and
features as regular workqueues but execute their work items in the softirq
context.
As there is always only one BH execution context per CPU, none of the concurrency management mechanisms applies and a BH workqueue can be thought of as a convenience wrapper around softirq. Except for the inability to sleep while executing and lack of max_active adjustments, BH workqueues and work items should behave the same as regular workqueues and work items. Currently, the execution is hooked to tasklet[_hi]. However, the goal is to convert all tasklet users over to BH workqueues. Once the conversion is complete, tasklet can be removed and BH workqueues can directly take over the tasklet softirqs. system_bh[_highpri]_wq are added. As queue-wide flushing doesn't exist in tasklet, all existing tasklet users should be able to use the system BH workqueues without creating their own. Signed-off-by: Tejun Heo Suggested-by: Linus Torvalds Link: http://lkml.kernel.org/r/CAHk-=wjDW53w4-YcSmgKC5RruiRLHmJ1sXeYdp_ZgVoBw=5byA@mail.gmail.com --- Documentation/core-api/workqueue.rst | 29 +++- include/linux/workqueue.h | 9 ++ kernel/workqueue.c | 231 ++++++++++++++++++++++----- kernel/workqueue_internal.h | 3 + tools/workqueue/wq_dump.py | 11 +- 5 files changed, 237 insertions(+), 46 deletions(-) diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst index 33c4539155d9..2d6af6c4665c 100644 --- a/Documentation/core-api/workqueue.rst +++ b/Documentation/core-api/workqueue.rst @@ -77,10 +77,12 @@ wants a function to be executed asynchronously it has to set up a work item pointing to that function and queue that work item on a workqueue. -Special purpose threads, called worker threads, execute the functions -off of the queue, one after the other. If no work is queued, the -worker threads become idle. These worker threads are managed in so -called worker-pools. +A work item can be executed in either a thread or the BH (softirq) context. + +For threaded workqueues, special purpose threads, called [k]workers, execute +the functions off of the queue, one after the other. If no work is queued, +the worker threads become idle. These worker threads are managed in +worker-pools. The cmwq design differentiates between the user-facing workqueues that subsystems and drivers queue work items on and the backend mechanism @@ -91,6 +93,12 @@ for high priority ones, for each possible CPU and some extra worker-pools to serve work items queued on unbound workqueues - the number of these backing pools is dynamic. +BH workqueues use the same framework. However, as there can only be one +concurrent execution context, there's no need to worry about concurrency. +Each per-CPU BH worker pool contains only one pseudo worker which represents +the BH execution context. A BH workqueue can be considered a convenience +interface to softirq. + Subsystems and drivers can create and queue work items through special workqueue API functions as they see fit. They can influence some aspects of the way the work items are executed by setting flags on the @@ -106,7 +114,7 @@ unless specifically overridden, a work item of a bound workqueue will be queued on the worklist of either normal or highpri worker-pool that is associated to the CPU the issuer is running on. -For any worker pool implementation, managing the concurrency level +For any thread pool implementation, managing the concurrency level (how many execution contexts are active) is an important issue. cmwq tries to keep the concurrency at a minimal but sufficient level. 
Minimal to save resources and sufficient in that the system is used at @@ -164,6 +172,17 @@ resources, scheduled and executed. ``flags`` --------- +``WQ_BH`` + BH workqueues can be considered a convenience interface to softirq. BH + workqueues are always per-CPU and all BH work items are executed in the + queueing CPU's softirq context in the queueing order. + + All BH workqueues must have 0 ``max_active`` and ``WQ_HIGHPRI`` is the + only allowed additional flag. + + BH work items cannot sleep. All other features such as delayed queueing, + flushing and canceling are supported. + ``WQ_UNBOUND`` Work items queued to an unbound wq are served by the special worker-pools which host workers which are not bound to any diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index 232baea90a1d..3ac044691246 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -353,6 +353,7 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; } * Documentation/core-api/workqueue.rst. */ enum wq_flags { + WQ_BH = 1 << 0, /* execute in bottom half (softirq) context */ WQ_UNBOUND = 1 << 1, /* not bound to any cpu */ WQ_FREEZABLE = 1 << 2, /* freeze during suspend */ WQ_MEM_RECLAIM = 1 << 3, /* may be used for memory reclaim */ @@ -392,6 +393,9 @@ enum wq_flags { __WQ_ORDERED = 1 << 17, /* internal: workqueue is ordered */ __WQ_LEGACY = 1 << 18, /* internal: create*_workqueue() */ __WQ_ORDERED_EXPLICIT = 1 << 19, /* internal: alloc_ordered_workqueue() */ + + /* BH wq only allows the following flags */ + __WQ_BH_ALLOWS = WQ_BH | WQ_HIGHPRI, }; enum wq_consts { @@ -434,6 +438,9 @@ enum wq_consts { * they are same as their non-power-efficient counterparts - e.g. * system_power_efficient_wq is identical to system_wq if * 'wq_power_efficient' is disabled. See WQ_POWER_EFFICIENT for more info. + * + * system_bh[_highpri]_wq are convenience interface to softirq. BH work items + * are executed in the queueing CPU's BH context in the queueing order. */ extern struct workqueue_struct *system_wq; extern struct workqueue_struct *system_highpri_wq; @@ -442,6 +449,8 @@ extern struct workqueue_struct *system_unbound_wq; extern struct workqueue_struct *system_freezable_wq; extern struct workqueue_struct *system_power_efficient_wq; extern struct workqueue_struct *system_freezable_power_efficient_wq; +extern struct workqueue_struct *system_bh_wq; +extern struct workqueue_struct *system_bh_highpri_wq; /** * alloc_workqueue - allocate a workqueue diff --git a/kernel/workqueue.c b/kernel/workqueue.c index f93554e479c4..9c972db910f6 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -72,8 +72,12 @@ enum worker_pool_flags { * Note that DISASSOCIATED should be flipped only while holding * wq_pool_attach_mutex to avoid changing binding state while * worker_attach_to_pool() is in progress. + * + * As there can only be one concurrent BH execution context per CPU, a + * BH pool is per-CPU and always DISASSOCIATED. */ - POOL_MANAGER_ACTIVE = 1 << 0, /* being managed */ + POOL_BH = 1 << 0, /* is a BH pool */ + POOL_MANAGER_ACTIVE = 1 << 1, /* being managed */ POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */ }; @@ -115,6 +119,14 @@ enum wq_internal_consts { WQ_NAME_LEN = 32, }; +/* + * We don't want to trap softirq for too long. See MAX_SOFTIRQ_TIME and + * MAX_SOFTIRQ_RESTART in kernel/softirq.c. These are macros because + * msecs_to_jiffies() can't be an initializer. 
+ */ +#define BH_WORKER_JIFFIES msecs_to_jiffies(2) +#define BH_WORKER_RESTARTS 10 + /* * Structure fields follow one of the following exclusion rules. * @@ -441,8 +453,13 @@ static bool wq_debug_force_rr_cpu = false; #endif module_param_named(debug_force_rr_cpu, wq_debug_force_rr_cpu, bool, 0644); +/* the BH worker pools */ +static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS], + bh_worker_pools); + /* the per-cpu worker pools */ -static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS], cpu_worker_pools); +static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS], + cpu_worker_pools); static DEFINE_IDR(worker_pool_idr); /* PR: idr of all pools */ @@ -476,8 +493,13 @@ struct workqueue_struct *system_power_efficient_wq __ro_after_init; EXPORT_SYMBOL_GPL(system_power_efficient_wq); struct workqueue_struct *system_freezable_power_efficient_wq __ro_after_init; EXPORT_SYMBOL_GPL(system_freezable_power_efficient_wq); +struct workqueue_struct *system_bh_wq; +EXPORT_SYMBOL_GPL(system_bh_wq); +struct workqueue_struct *system_bh_highpri_wq; +EXPORT_SYMBOL_GPL(system_bh_highpri_wq); static int worker_thread(void *__worker); +static void bh_worker_taskletfn(struct tasklet_struct *tasklet); static void workqueue_sysfs_unregister(struct workqueue_struct *wq); static void show_pwq(struct pool_workqueue *pwq); static void show_one_worker_pool(struct worker_pool *pool); @@ -496,6 +518,11 @@ static void show_one_worker_pool(struct worker_pool *pool); !lockdep_is_held(&wq_pool_mutex), \ "RCU, wq->mutex or wq_pool_mutex should be held") +#define for_each_bh_worker_pool(pool, cpu) \ + for ((pool) = &per_cpu(bh_worker_pools, cpu)[0]; \ + (pool) < &per_cpu(bh_worker_pools, cpu)[NR_STD_WORKER_POOLS]; \ + (pool)++) + #define for_each_cpu_worker_pool(pool, cpu) \ for ((pool) = &per_cpu(cpu_worker_pools, cpu)[0]; \ (pool) < &per_cpu(cpu_worker_pools, cpu)[NR_STD_WORKER_POOLS]; \ @@ -1184,6 +1211,14 @@ static bool kick_pool(struct worker_pool *pool) if (!need_more_worker(pool) || !worker) return false; + if (pool->flags & POOL_BH) { + if (pool->attrs->nice == HIGHPRI_NICE_LEVEL) + tasklet_hi_schedule(&worker->bh_tasklet); + else + tasklet_schedule(&worker->bh_tasklet); + return true; + } + p = worker->task; #ifdef CONFIG_SMP @@ -1663,8 +1698,16 @@ static bool pwq_tryinc_nr_active(struct pool_workqueue *pwq, bool fill) lockdep_assert_held(&pool->lock); if (!nna) { - /* per-cpu workqueue, pwq->nr_active is sufficient */ - obtained = pwq->nr_active < READ_ONCE(wq->max_active); + /* + * BH workqueues always share a single execution context per CPU + * and don't impose any max_active limit, so tryinc always + * succeeds. For a per-cpu workqueue, checking pwq->nr_active is + * sufficient. + */ + if (wq->flags & WQ_BH) + obtained = true; + else + obtained = pwq->nr_active < READ_ONCE(wq->max_active); goto out; } @@ -2599,27 +2642,31 @@ static struct worker *create_worker(struct worker_pool *pool) worker->id = id; - if (pool->cpu >= 0) - snprintf(id_buf, sizeof(id_buf), "%d:%d%s", pool->cpu, id, - pool->attrs->nice < 0 ? 
"H" : ""); - else - snprintf(id_buf, sizeof(id_buf), "u%d:%d", pool->id, id); - - worker->task = kthread_create_on_node(worker_thread, worker, pool->node, - "kworker/%s", id_buf); - if (IS_ERR(worker->task)) { - if (PTR_ERR(worker->task) == -EINTR) { - pr_err("workqueue: Interrupted when creating a worker thread \"kworker/%s\"\n", - id_buf); - } else { - pr_err_once("workqueue: Failed to create a worker thread: %pe", - worker->task); + if (pool->flags & POOL_BH) { + tasklet_setup(&worker->bh_tasklet, bh_worker_taskletfn); + } else { + if (pool->cpu >= 0) + snprintf(id_buf, sizeof(id_buf), "%d:%d%s", pool->cpu, id, + pool->attrs->nice < 0 ? "H" : ""); + else + snprintf(id_buf, sizeof(id_buf), "u%d:%d", pool->id, id); + + worker->task = kthread_create_on_node(worker_thread, worker, + pool->node, "kworker/%s", id_buf); + if (IS_ERR(worker->task)) { + if (PTR_ERR(worker->task) == -EINTR) { + pr_err("workqueue: Interrupted when creating a worker thread \"kworker/%s\"\n", + id_buf); + } else { + pr_err_once("workqueue: Failed to create a worker thread: %pe", + worker->task); + } + goto fail; } - goto fail; - } - set_user_nice(worker->task, pool->attrs->nice); - kthread_bind_mask(worker->task, pool_allowed_cpus(pool)); + set_user_nice(worker->task, pool->attrs->nice); + kthread_bind_mask(worker->task, pool_allowed_cpus(pool)); + } /* successful, attach the worker to the pool */ worker_attach_to_pool(worker, pool); @@ -2635,7 +2682,8 @@ static struct worker *create_worker(struct worker_pool *pool) * check if not woken up soon. As kick_pool() is noop if @pool is empty, * wake it up explicitly. */ - wake_up_process(worker->task); + if (worker->task) + wake_up_process(worker->task); raw_spin_unlock_irq(&pool->lock); @@ -2977,7 +3025,8 @@ __acquires(&pool->lock) worker->current_work = work; worker->current_func = work->func; worker->current_pwq = pwq; - worker->current_at = worker->task->se.sum_exec_runtime; + if (worker->task) + worker->current_at = worker->task->se.sum_exec_runtime; work_data = *work_data_bits(work); worker->current_color = get_work_color(work_data); @@ -3075,7 +3124,8 @@ __acquires(&pool->lock) * stop_machine. At the same time, report a quiescent RCU state so * the same condition doesn't freeze RCU. */ - cond_resched(); + if (worker->task) + cond_resched(); raw_spin_lock_irq(&pool->lock); @@ -3358,6 +3408,43 @@ static int rescuer_thread(void *__rescuer) goto repeat; } +void bh_worker_taskletfn(struct tasklet_struct *tasklet) +{ + struct worker *worker = container_of(tasklet, struct worker, bh_tasklet); + struct worker_pool *pool = worker->pool; + int nr_restarts = BH_WORKER_RESTARTS; + unsigned long end = jiffies + BH_WORKER_JIFFIES; + + raw_spin_lock_irq(&pool->lock); + worker_leave_idle(worker); + + /* + * This function follows the structure of worker_thread(). See there for + * explanations on each step. 
+ */ + if (!need_more_worker(pool)) + goto done; + + WARN_ON_ONCE(!list_empty(&worker->scheduled)); + worker_clr_flags(worker, WORKER_PREP | WORKER_REBOUND); + + do { + struct work_struct *work = + list_first_entry(&pool->worklist, + struct work_struct, entry); + + if (assign_work(work, worker, NULL)) + process_scheduled_works(worker); + } while (keep_working(pool) && + --nr_restarts && time_before(jiffies, end)); + + worker_set_flags(worker, WORKER_PREP); +done: + worker_enter_idle(worker); + kick_pool(pool); + raw_spin_unlock_irq(&pool->lock); +} + /** * check_flush_dependency - check for flush dependency sanity * @target_wq: workqueue being flushed @@ -3430,6 +3517,7 @@ static void insert_wq_barrier(struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker) { + static __maybe_unused struct lock_class_key bh_key, thr_key; unsigned int work_flags = 0; unsigned int work_color; struct list_head *head; @@ -3439,8 +3527,13 @@ static void insert_wq_barrier(struct pool_workqueue *pwq, * as we know for sure that this will not trigger any of the * checks and call back into the fixup functions where we * might deadlock. + * + * BH and threaded workqueues need separate lockdep keys to avoid + * spuriously triggering "inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} + * usage". */ - INIT_WORK_ONSTACK(&barr->work, wq_barrier_func); + INIT_WORK_ONSTACK_KEY(&barr->work, wq_barrier_func, + (pwq->wq->flags & WQ_BH) ? &bh_key : &thr_key); __set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(&barr->work)); init_completion_map(&barr->done, &target->lockdep_map); @@ -3546,15 +3639,31 @@ static bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq, static void touch_wq_lockdep_map(struct workqueue_struct *wq) { +#ifdef CONFIG_LOCKDEP + if (wq->flags & WQ_BH) + local_bh_disable(); + lock_map_acquire(&wq->lockdep_map); lock_map_release(&wq->lockdep_map); + + if (wq->flags & WQ_BH) + local_bh_enable(); +#endif } static void touch_work_lockdep_map(struct work_struct *work, struct workqueue_struct *wq) { +#ifdef CONFIG_LOCKDEP + if (wq->flags & WQ_BH) + local_bh_disable(); + lock_map_acquire(&work->lockdep_map); lock_map_release(&work->lockdep_map); + + if (wq->flags & WQ_BH) + local_bh_enable(); +#endif } /** @@ -5007,10 +5116,17 @@ static int alloc_and_link_pwqs(struct workqueue_struct *wq) if (!(wq->flags & WQ_UNBOUND)) { for_each_possible_cpu(cpu) { - struct pool_workqueue **pwq_p = - per_cpu_ptr(wq->cpu_pwq, cpu); - struct worker_pool *pool = - &(per_cpu_ptr(cpu_worker_pools, cpu)[highpri]); + struct pool_workqueue **pwq_p; + struct worker_pool __percpu *pools; + struct worker_pool *pool; + + if (wq->flags & WQ_BH) + pools = bh_worker_pools; + else + pools = cpu_worker_pools; + + pool = &(per_cpu_ptr(pools, cpu)[highpri]); + pwq_p = per_cpu_ptr(wq->cpu_pwq, cpu); *pwq_p = kmem_cache_alloc_node(pwq_cache, GFP_KERNEL, pool->node); @@ -5185,6 +5301,13 @@ struct workqueue_struct *alloc_workqueue(const char *fmt, size_t wq_size; int name_len; + if (flags & WQ_BH) { + if (WARN_ON_ONCE(flags & ~__WQ_BH_ALLOWS)) + return NULL; + if (WARN_ON_ONCE(max_active)) + return NULL; + } + /* * Unbound && max_active == 1 used to imply ordered, which is no longer * the case on many machines due to per-pod pools. 
While @@ -5404,6 +5527,9 @@ EXPORT_SYMBOL_GPL(destroy_workqueue); */ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active) { + /* max_active doesn't mean anything for BH workqueues */ + if (WARN_ON(wq->flags & WQ_BH)) + return; /* disallow meddling with max_active for ordered workqueues */ if (WARN_ON(wq->flags & __WQ_ORDERED_EXPLICIT)) return; @@ -5605,7 +5731,12 @@ static void pr_cont_pool_info(struct worker_pool *pool) pr_cont(" cpus=%*pbl", nr_cpumask_bits, pool->attrs->cpumask); if (pool->node != NUMA_NO_NODE) pr_cont(" node=%d", pool->node); - pr_cont(" flags=0x%x nice=%d", pool->flags, pool->attrs->nice); + pr_cont(" flags=0x%x", pool->flags); + if (pool->flags & POOL_BH) + pr_cont(" bh%s", + pool->attrs->nice == HIGHPRI_NICE_LEVEL ? "-hi" : ""); + else + pr_cont(" nice=%d", pool->attrs->nice); } struct pr_cont_work_struct { @@ -6078,13 +6209,15 @@ int workqueue_online_cpu(unsigned int cpu) mutex_lock(&wq_pool_mutex); for_each_pool(pool, pi) { - mutex_lock(&wq_pool_attach_mutex); + /* BH pools aren't affected by hotplug */ + if (pool->flags & POOL_BH) + continue; + mutex_lock(&wq_pool_attach_mutex); if (pool->cpu == cpu) rebind_workers(pool); else if (pool->cpu < 0) restore_unbound_workers_cpumask(pool, cpu); - mutex_unlock(&wq_pool_attach_mutex); } @@ -7206,10 +7339,16 @@ void __init workqueue_init_early(void) pt->pod_node[0] = NUMA_NO_NODE; pt->cpu_pod[0] = 0; - /* initialize CPU pools */ + /* initialize BH and CPU pools */ for_each_possible_cpu(cpu) { struct worker_pool *pool; + i = 0; + for_each_bh_worker_pool(pool, cpu) { + init_cpu_worker_pool(pool, cpu, std_nice[i++]); + pool->flags |= POOL_BH; + } + i = 0; for_each_cpu_worker_pool(pool, cpu) init_cpu_worker_pool(pool, cpu, std_nice[i++]); @@ -7245,10 +7384,14 @@ void __init workqueue_init_early(void) system_freezable_power_efficient_wq = alloc_workqueue("events_freezable_pwr_efficient", WQ_FREEZABLE | WQ_POWER_EFFICIENT, 0); + system_bh_wq = alloc_workqueue("events_bh", WQ_BH, 0); + system_bh_highpri_wq = alloc_workqueue("events_bh_highpri", + WQ_BH | WQ_HIGHPRI, 0); BUG_ON(!system_wq || !system_highpri_wq || !system_long_wq || !system_unbound_wq || !system_freezable_wq || !system_power_efficient_wq || - !system_freezable_power_efficient_wq); + !system_freezable_power_efficient_wq || + !system_bh_wq || !system_bh_highpri_wq); } static void __init wq_cpu_intensive_thresh_init(void) @@ -7314,9 +7457,10 @@ void __init workqueue_init(void) * up. Also, create a rescuer for workqueues that requested it. */ for_each_possible_cpu(cpu) { - for_each_cpu_worker_pool(pool, cpu) { + for_each_bh_worker_pool(pool, cpu) + pool->node = cpu_to_node(cpu); + for_each_cpu_worker_pool(pool, cpu) pool->node = cpu_to_node(cpu); - } } list_for_each_entry(wq, &workqueues, list) { @@ -7327,7 +7471,16 @@ void __init workqueue_init(void) mutex_unlock(&wq_pool_mutex); - /* create the initial workers */ + /* + * Create the initial workers. A BH pool has one pseudo worker that + * represents the shared BH execution context and thus doesn't get + * affected by hotplug events. Create the BH pseudo workers for all + * possible CPUs here. 
+	 */
+	for_each_possible_cpu(cpu)
+		for_each_bh_worker_pool(pool, cpu)
+			BUG_ON(!create_worker(pool));
+
 	for_each_online_cpu(cpu) {
 		for_each_cpu_worker_pool(pool, cpu) {
 			pool->flags &= ~POOL_DISASSOCIATED;
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index f6275944ada7..8da306b76ec9 100644
--- a/kernel/workqueue_internal.h
+++ b/kernel/workqueue_internal.h
@@ -10,6 +10,7 @@
 
 #include
 #include
+#include
 #include
 
 struct worker_pool;
@@ -42,6 +43,8 @@ struct worker {
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
+	struct tasklet_struct	bh_tasklet;	/* I: tasklet for bh pool */
+
 	struct worker_pool	*pool;		/* A: the associated pool */
 						/* L: for rescuers */
 	struct list_head	node;		/* A: anchored at pool->workers */
diff --git a/tools/workqueue/wq_dump.py b/tools/workqueue/wq_dump.py
index bd381511bd9a..d29b918306b4 100644
--- a/tools/workqueue/wq_dump.py
+++ b/tools/workqueue/wq_dump.py
@@ -79,7 +79,9 @@ args = parser.parse_args()
 wq_type_len = 9
 
 def wq_type_str(wq):
-    if wq.flags & WQ_UNBOUND:
+    if wq.flags & WQ_BH:
+        return f'{"bh":{wq_type_len}}'
+    elif wq.flags & WQ_UNBOUND:
         if wq.flags & WQ_ORDERED:
             return f'{"ordered":{wq_type_len}}'
         else:
@@ -97,6 +99,7 @@ wq_pod_types = prog['wq_pod_types']
 wq_affn_dfl = prog['wq_affn_dfl']
 wq_affn_names = prog['wq_affn_names']
 
+WQ_BH = prog['WQ_BH']
 WQ_UNBOUND = prog['WQ_UNBOUND']
 WQ_ORDERED = prog['__WQ_ORDERED']
 WQ_MEM_RECLAIM = prog['WQ_MEM_RECLAIM']
@@ -107,6 +110,8 @@ WQ_AFFN_CACHE = prog['WQ_AFFN_CACHE']
 WQ_AFFN_NUMA = prog['WQ_AFFN_NUMA']
 WQ_AFFN_SYSTEM = prog['WQ_AFFN_SYSTEM']
 
+POOL_BH = prog['POOL_BH']
+
 WQ_NAME_LEN = prog['WQ_NAME_LEN'].value_()
 
 cpumask_str_len = len(cpumask_str(wq_unbound_cpumask))
@@ -151,10 +156,12 @@ max_ref_len = 0
 
 for pi, pool in idr_for_each(worker_pool_idr):
     pool = drgn.Object(prog, 'struct worker_pool', address=pool)
-    print(f'pool[{pi:0{max_pool_id_len}}] ref={pool.refcnt.value_():{max_ref_len}} nice={pool.attrs.nice.value_():3} ', end='')
+    print(f'pool[{pi:0{max_pool_id_len}}] flags=0x{pool.flags.value_():02x} ref={pool.refcnt.value_():{max_ref_len}} nice={pool.attrs.nice.value_():3} ', end='')
     print(f'idle/workers={pool.nr_idle.value_():3}/{pool.nr_workers.value_():3} ', end='')
     if pool.cpu >= 0:
         print(f'cpu={pool.cpu.value_():3}', end='')
+        if pool.flags & POOL_BH:
+            print(' bh', end='')
     else:
         print(f'cpus={cpumask_str(pool.attrs.cpumask)}', end='')
     print(f' pod_cpus={cpumask_str(pool.attrs.__pod_cpumask)}', end='')
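For reference, a BH workqueue added by this patch is used like any other
workqueue. The module-style sketch below is illustrative only (my_bh_wq,
my_bh_work and the init function are invented names) and relies solely on the
interfaces introduced above: WQ_BH, system_bh_wq, and the rule that BH
workqueues take max_active == 0 with at most WQ_HIGHPRI as an extra flag.

static void my_bh_work_fn(struct work_struct *work)
{
	/* Runs in softirq context: must not sleep, but can otherwise be
	 * queued, flushed and canceled like a regular work item. */
	pr_info("BH work ran on CPU%d\n", smp_processor_id());
}

static DECLARE_WORK(my_bh_work, my_bh_work_fn);
static struct workqueue_struct *my_bh_wq;

static int __init my_bh_example_init(void)
{
	/* A dedicated BH workqueue: only WQ_HIGHPRI may be added to WQ_BH
	 * and max_active must be 0. */
	my_bh_wq = alloc_workqueue("my_bh", WQ_BH, 0);
	if (!my_bh_wq)
		return -ENOMEM;

	/* The shared system_bh_wq works just as well for most users. */
	queue_work(my_bh_wq, &my_bh_work);
	flush_work(&my_bh_work);	/* waiting is fine from process context */

	destroy_workqueue(my_bh_wq);
	return 0;
}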
From patchwork Tue Jan 30 09:11:51 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537041
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, Tejun Heo,
    Arjan van de Ven
Subject: [PATCH 4/8] backtracetest: Convert from tasklet to BH workqueue
Date: Mon, 29 Jan 2024 23:11:51 -1000
Message-ID: <20240130091300.2968534-5-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts backtracetest from tasklet to BH workqueue.

- Replace "irq" with "bh" in names and message to better reflect what's
  happening.

- Replace completion usage with a flush_work() call.

Signed-off-by: Tejun Heo
Cc: Arjan van de Ven
---
 kernel/backtracetest.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/kernel/backtracetest.c b/kernel/backtracetest.c
index 370217dd7e39..a4181234232b 100644
--- a/kernel/backtracetest.c
+++ b/kernel/backtracetest.c
@@ -21,24 +21,20 @@ static void backtrace_test_normal(void)
 	dump_stack();
 }
 
-static DECLARE_COMPLETION(backtrace_work);
-
-static void backtrace_test_irq_callback(unsigned long data)
+static void backtrace_test_bh_workfn(struct work_struct *work)
 {
 	dump_stack();
-	complete(&backtrace_work);
 }
 
-static DECLARE_TASKLET_OLD(backtrace_tasklet, &backtrace_test_irq_callback);
+static DECLARE_WORK(backtrace_bh_work, &backtrace_test_bh_workfn);
 
-static void backtrace_test_irq(void)
+static void backtrace_test_bh(void)
 {
-	pr_info("Testing a backtrace from irq context.\n");
+	pr_info("Testing a backtrace from BH context.\n");
 	pr_info("The following trace is a kernel self test and not a bug!\n");
 
-	init_completion(&backtrace_work);
-	tasklet_schedule(&backtrace_tasklet);
-	wait_for_completion(&backtrace_work);
+	queue_work(system_bh_wq, &backtrace_bh_work);
+	flush_work(&backtrace_bh_work);
 }
 
 #ifdef CONFIG_STACKTRACE
@@ -65,7 +61,7 @@ static int backtrace_regression_test(void)
 	pr_info("====[ backtrace testing ]===========\n");
 
 	backtrace_test_normal();
-	backtrace_test_irq();
+	backtrace_test_bh();
 	backtrace_test_saved();
 
 	pr_info("====[ end of backtrace testing ]====\n");
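The conversion above follows a shape that generalizes to most tasklet users.
A hedged sketch of the generic before/after pattern is below; struct my_dev,
its fields and the helper names are invented for illustration and are not
part of this series. Data is recovered with container_of() instead of
from_tasklet(), and teardown moves from tasklet_kill() to cancel_work_sync().

struct my_dev {
	/* was: struct tasklet_struct bh; */
	struct work_struct bh;
	unsigned long bh_runs;
	/* ... device state ... */
};

static void my_dev_bh_workfn(struct work_struct *work)
{
	/* was: struct my_dev *dev = from_tasklet(dev, t, bh); */
	struct my_dev *dev = container_of(work, struct my_dev, bh);

	/* bottom-half processing, still executed in softirq context */
	dev->bh_runs++;
}

static void my_dev_init_bh(struct my_dev *dev)
{
	/* was: tasklet_setup(&dev->bh, my_dev_bh_fn); */
	INIT_WORK(&dev->bh, my_dev_bh_workfn);
}

static void my_dev_irq_kick(struct my_dev *dev)
{
	/* was: tasklet_schedule(&dev->bh); */
	queue_work(system_bh_wq, &dev->bh);
}

static void my_dev_teardown(struct my_dev *dev)
{
	/* was: tasklet_kill(&dev->bh); */
	cancel_work_sync(&dev->bh);
}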
From patchwork Tue Jan 30 09:11:52 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537042
X-Patchwork-Delegate: snitzer@redhat.com
[141.239.144.21]) by smtp.gmail.com with ESMTPSA id lo18-20020a056a003d1200b006dde0b7d633sm7301241pfb.77.2024.01.30.01.13.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jan 2024 01:13:12 -0800 (PST) Sender: Tejun Heo From: Tejun Heo To: torvalds@linux-foundation.org, mpatocka@redhat.com Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com, ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com, houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org, netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com, Tejun Heo , Greg Kroah-Hartman , Alan Stern , linux-usb@vger.kernel.org Subject: [PATCH 5/8] usb: core: hcd: Convert from tasklet to BH workqueue Date: Mon, 29 Jan 2024 23:11:52 -1000 Message-ID: <20240130091300.2968534-6-tj@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240130091300.2968534-1-tj@kernel.org> References: <20240130091300.2968534-1-tj@kernel.org> Precedence: bulk X-Mailing-List: dm-devel@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The only generic interface to execute asynchronously in the BH context is tasklet; however, it's marked deprecated and has some design flaws. To replace tasklets, BH workqueue support was recently added. A BH workqueue behaves similarly to regular workqueues except that the queued work items are executed in the BH context. This patch converts usb hcd from tasklet to BH workqueue. Signed-off-by: Tejun Heo Cc: Greg Kroah-Hartman Cc: Alan Stern Cc: linux-usb@vger.kernel.org Acked-by: Greg Kroah-Hartman Acked-by: Davidlohr Bueso --- drivers/usb/core/hcd.c | 23 ++++++++++++----------- include/linux/usb/hcd.h | 2 +- 2 files changed, 13 insertions(+), 12 deletions(-) diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c index 12b6dfeaf658..edf74458474a 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c @@ -1664,9 +1664,10 @@ static void __usb_hcd_giveback_urb(struct urb *urb) usb_put_urb(urb); } -static void usb_giveback_urb_bh(struct tasklet_struct *t) +static void usb_giveback_urb_bh(struct work_struct *work) { - struct giveback_urb_bh *bh = from_tasklet(bh, t, bh); + struct giveback_urb_bh *bh = + container_of(work, struct giveback_urb_bh, bh); struct list_head local_list; spin_lock_irq(&bh->lock); @@ -1691,9 +1692,9 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t) spin_lock_irq(&bh->lock); if (!list_empty(&bh->head)) { if (bh->high_prio) - tasklet_hi_schedule(&bh->bh); + queue_work(system_bh_highpri_wq, &bh->bh); else - tasklet_schedule(&bh->bh); + queue_work(system_bh_wq, &bh->bh); } bh->running = false; spin_unlock_irq(&bh->lock); @@ -1706,7 +1707,7 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t) * @status: completion status code for the URB. * * Context: atomic. The completion callback is invoked in caller's context. - * For HCDs with HCD_BH flag set, the completion callback is invoked in tasklet + * For HCDs with HCD_BH flag set, the completion callback is invoked in BH * context (except for URBs submitted to the root hub which always complete in * caller's context). 
  *
@@ -1725,7 +1726,7 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
         struct giveback_urb_bh *bh;
         bool running;
 
-        /* pass status to tasklet via unlinked */
+        /* pass status to BH via unlinked */
         if (likely(!urb->unlinked))
                 urb->unlinked = status;
 
@@ -1747,9 +1748,9 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
         if (running)
                 ;
         else if (bh->high_prio)
-                tasklet_hi_schedule(&bh->bh);
+                queue_work(system_bh_highpri_wq, &bh->bh);
         else
-                tasklet_schedule(&bh->bh);
+                queue_work(system_bh_wq, &bh->bh);
 }
 EXPORT_SYMBOL_GPL(usb_hcd_giveback_urb);
 
@@ -2540,7 +2541,7 @@ static void init_giveback_urb_bh(struct giveback_urb_bh *bh)
 {
         spin_lock_init(&bh->lock);
         INIT_LIST_HEAD(&bh->head);
-        tasklet_setup(&bh->bh, usb_giveback_urb_bh);
+        INIT_WORK(&bh->bh, usb_giveback_urb_bh);
 }
 
 struct usb_hcd *__usb_create_hcd(const struct hc_driver *driver,
@@ -2926,7 +2927,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
             && device_can_wakeup(&hcd->self.root_hub->dev))
                 dev_dbg(hcd->self.controller, "supports USB remote wakeup\n");
 
-        /* initialize tasklets */
+        /* initialize BHs */
         init_giveback_urb_bh(&hcd->high_prio_bh);
         hcd->high_prio_bh.high_prio = true;
         init_giveback_urb_bh(&hcd->low_prio_bh);
@@ -3036,7 +3037,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
         mutex_unlock(&usb_bus_idr_lock);
 
         /*
-         * tasklet_kill() isn't needed here because:
+         * flush_work() isn't needed here because:
          * - driver's disconnect() called from usb_disconnect() should
          *   make sure its URBs are completed during the disconnect()
          *   callback
diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
index 00724b4f6e12..f698aac71de3 100644
--- a/include/linux/usb/hcd.h
+++ b/include/linux/usb/hcd.h
@@ -55,7 +55,7 @@ struct giveback_urb_bh {
         bool high_prio;
         spinlock_t lock;
         struct list_head head;
-        struct tasklet_struct bh;
+        struct work_struct bh;
         struct usb_host_endpoint *completing_ep;
 };
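For readers coming from the tasklet API, the conversion pattern the hunks
above apply boils down to roughly the following sketch. It is not part of
the patch: the struct and function names are illustrative stand-ins, while
INIT_WORK(), container_of(), queue_work() and the system_bh_wq /
system_bh_highpri_wq workqueues are the interfaces introduced or relied on
earlier in this series.

/*
 * Minimal sketch of the tasklet -> BH workqueue conversion pattern; the
 * "my_driver_bh" names are made up for illustration.
 */
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/list.h>

struct my_driver_bh {
        bool high_prio;
        spinlock_t lock;
        struct list_head head;
        struct work_struct bh;          /* was: struct tasklet_struct bh */
};

static void my_driver_bh_fn(struct work_struct *work)
{
        /* was: from_tasklet(bh, t, bh) */
        struct my_driver_bh *bh = container_of(work, struct my_driver_bh, bh);

        /* process bh->head under bh->lock, as usb_giveback_urb_bh() does */
}

static void my_driver_bh_init(struct my_driver_bh *bh)
{
        spin_lock_init(&bh->lock);
        INIT_LIST_HEAD(&bh->head);
        INIT_WORK(&bh->bh, my_driver_bh_fn);    /* was: tasklet_setup() */
}

static void my_driver_bh_kick(struct my_driver_bh *bh)
{
        /* was: tasklet_hi_schedule() / tasklet_schedule() */
        if (bh->high_prio)
                queue_work(system_bh_highpri_wq, &bh->bh);
        else
                queue_work(system_bh_wq, &bh->bh);
}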
From patchwork Tue Jan 30 09:11:53 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537043
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com,
    Tejun Heo, Eric Dumazet, "David S. Miller", David Ahern,
    Jakub Kicinski, Paolo Abeni
Subject: [PATCH 6/8] net: tcp: tsq: Convert from tasklet to BH workqueue
Date: Mon, 29 Jan 2024 23:11:53 -1000
Message-ID: <20240130091300.2968534-7-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added.
A BH workqueue behaves similarly to regular workqueues except that the
queued work items are executed in the BH context.

This patch converts the TCP Small Queues implementation from tasklet to BH
workqueue. Semantically, this is an equivalent conversion and there
shouldn't be any user-visible behavior changes. While workqueue's queueing
and execution paths are a bit heavier than tasklet's, unless the work item
is being queued for every packet, the difference hopefully shouldn't
matter.

My experience with the networking stack is very limited and this patch
definitely needs attention from someone who actually understands
networking.

Signed-off-by: Tejun Heo
Cc: Eric Dumazet
Cc: "David S. Miller" (maintainer:NETWORKING [IPv4/IPv6])
Cc: David Ahern
Cc: Jakub Kicinski (maintainer:NETWORKING [GENERAL])
Cc: Paolo Abeni
Cc: netdev@vger.kernel.org (open list:NETWORKING [TCP])
---
 include/net/tcp.h     |  2 +-
 net/ipv4/tcp.c        |  2 +-
 net/ipv4/tcp_output.c | 36 ++++++++++++++++++------------------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index dd78a1181031..89f3702be47a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -324,7 +324,7 @@ extern struct proto tcp_prot;
 #define TCP_DEC_STATS(net, field)       SNMP_DEC_STATS((net)->mib.tcp_statistics, field)
 #define TCP_ADD_STATS(net, field, val)  SNMP_ADD_STATS((net)->mib.tcp_statistics, field, val)
 
-void tcp_tasklet_init(void);
+void tcp_tsq_work_init(void);
 
 int tcp_v4_err(struct sk_buff *skb, u32);
 
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 1baa484d2190..d085ee5642fe 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4772,6 +4772,6 @@ void __init tcp_init(void)
         tcp_v4_init();
         tcp_metrics_init();
         BUG_ON(tcp_register_congestion_control(&tcp_reno) != 0);
-        tcp_tasklet_init();
+        tcp_tsq_work_init();
         mptcp_init();
 }
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index e3167ad96567..d11be6eebb6e 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1049,15 +1049,15 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
  * needs to be reallocated in a driver.
  * The invariant being skb->truesize subtracted from sk->sk_wmem_alloc
  *
- * Since transmit from skb destructor is forbidden, we use a tasklet
+ * Since transmit from skb destructor is forbidden, we use a BH work item
  * to process all sockets that eventually need to send more skbs.
- * We use one tasklet per cpu, with its own queue of sockets.
+ * We use one work item per cpu, with its own queue of sockets.
  */
-struct tsq_tasklet {
-        struct tasklet_struct   tasklet;
+struct tsq_work {
+        struct work_struct      work;
         struct list_head        head; /* queue of tcp sockets */
 };
-static DEFINE_PER_CPU(struct tsq_tasklet, tsq_tasklet);
+static DEFINE_PER_CPU(struct tsq_work, tsq_work);
 
 static void tcp_tsq_write(struct sock *sk)
 {
@@ -1087,14 +1087,14 @@ static void tcp_tsq_handler(struct sock *sk)
         bh_unlock_sock(sk);
 }
 
 /*
- * One tasklet per cpu tries to send more skbs.
- * We run in tasklet context but need to disable irqs when
+ * One work item per cpu tries to send more skbs.
+ * We run in BH context but need to disable irqs when
  * transferring tsq->head because tcp_wfree() might
  * interrupt us (non NAPI drivers)
  */
-static void tcp_tasklet_func(struct tasklet_struct *t)
+static void tcp_tsq_workfn(struct work_struct *work)
 {
-        struct tsq_tasklet *tsq = from_tasklet(tsq, t, tasklet);
+        struct tsq_work *tsq = container_of(work, struct tsq_work, work);
         LIST_HEAD(list);
         unsigned long flags;
         struct list_head *q, *n;
@@ -1164,15 +1164,15 @@ void tcp_release_cb(struct sock *sk)
 }
 EXPORT_SYMBOL(tcp_release_cb);
 
-void __init tcp_tasklet_init(void)
+void __init tcp_tsq_work_init(void)
 {
         int i;
 
         for_each_possible_cpu(i) {
-                struct tsq_tasklet *tsq = &per_cpu(tsq_tasklet, i);
+                struct tsq_work *tsq = &per_cpu(tsq_work, i);
 
                 INIT_LIST_HEAD(&tsq->head);
-                tasklet_setup(&tsq->tasklet, tcp_tasklet_func);
+                INIT_WORK(&tsq->work, tcp_tsq_workfn);
         }
 }
 
@@ -1186,11 +1186,11 @@ void tcp_wfree(struct sk_buff *skb)
         struct sock *sk = skb->sk;
         struct tcp_sock *tp = tcp_sk(sk);
         unsigned long flags, nval, oval;
-        struct tsq_tasklet *tsq;
+        struct tsq_work *tsq;
         bool empty;
 
         /* Keep one reference on sk_wmem_alloc.
-         * Will be released by sk_free() from here or tcp_tasklet_func()
+         * Will be released by sk_free() from here or tcp_tsq_workfn()
          */
         WARN_ON(refcount_sub_and_test(skb->truesize - 1, &sk->sk_wmem_alloc));
 
@@ -1212,13 +1212,13 @@ void tcp_wfree(struct sk_buff *skb)
                 nval = (oval & ~TSQF_THROTTLED) | TSQF_QUEUED;
         } while (!try_cmpxchg(&sk->sk_tsq_flags, &oval, nval));
 
-        /* queue this socket to tasklet queue */
+        /* queue this socket to BH workqueue */
         local_irq_save(flags);
-        tsq = this_cpu_ptr(&tsq_tasklet);
+        tsq = this_cpu_ptr(&tsq_work);
         empty = list_empty(&tsq->head);
         list_add(&tp->tsq_node, &tsq->head);
         if (empty)
-                tasklet_schedule(&tsq->tasklet);
+                queue_work(system_bh_wq, &tsq->work);
         local_irq_restore(flags);
         return;
 out:
@@ -2623,7 +2623,7 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
         if (refcount_read(&sk->sk_wmem_alloc) > limit) {
                 /* Always send skb if rtx queue is empty or has one skb.
                  * No need to wait for TX completion to call us back,
-                 * after softirq/tasklet schedule.
+                 * after softirq schedule.
                  * This helps when TX completions are delayed too much.
                  */
                 if (tcp_rtx_queue_empty_or_single_skb(sk))
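As a rough illustration of the per-CPU deferral pattern the TSQ hunks above
end up with, here is a simplified sketch. It is not part of the patch: the
struct, function and helper names are stand-ins (the real deferral lives in
tcp_wfree()), the list handling is condensed, and system_bh_wq comes from
the earlier patches in this series.

/* Simplified sketch of per-CPU deferral to a BH workqueue. */
#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/list.h>

struct tsq_example {
        struct work_struct      work;   /* initialized once per CPU with INIT_WORK() */
        struct list_head        head;   /* queue of pending sockets */
};
static DEFINE_PER_CPU(struct tsq_example, tsq_example);

static void tsq_example_workfn(struct work_struct *work)
{
        struct tsq_example *tsq = container_of(work, struct tsq_example, work);
        LIST_HEAD(list);
        unsigned long flags;

        /* splice the per-CPU list with irqs off, then process it in BH context */
        local_irq_save(flags);
        list_splice_init(&tsq->head, &list);
        local_irq_restore(flags);

        /* ... walk "list" and run the TSQ handler for each entry ... */
}

static void tsq_example_defer(struct list_head *node)
{
        struct tsq_example *tsq;
        unsigned long flags;
        bool empty;

        local_irq_save(flags);
        tsq = this_cpu_ptr(&tsq_example);
        empty = list_empty(&tsq->head);
        list_add(node, &tsq->head);
        if (empty)                              /* was: tasklet_schedule() */
                queue_work(system_bh_wq, &tsq->work);
        local_irq_restore(flags);
}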
From patchwork Tue Jan 30 09:11:54 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537044
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com,
    Tejun Heo, Alasdair Kergon, Mike Snitzer
Subject: [PATCH 7/8] dm-crypt: Convert from tasklet to BH workqueue
Date: Mon, 29 Jan 2024 23:11:54 -1000
Message-ID: <20240130091300.2968534-8-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts dm-crypt from tasklet to BH workqueue. Like a regular
workqueue, a BH workqueue allows freeing the currently executing work item.
Converting from tasklet to BH workqueue removes the need for deferring
bio_endio() again to a work item, which was buggy anyway.

I tested this lightly with "--perf-no_read_workqueue
--perf-no_write_workqueue" + some code modifications, but would really
appreciate it if someone who knows the code base better could take a look.
Signed-off-by: Tejun Heo
Link: http://lkml.kernel.org/r/82b964f0-c2c8-a2c6-5b1f-f3145dc2c8e5@redhat.com
Cc: Mikulas Patocka
Cc: Linus Torvalds
Cc: Alasdair Kergon
Cc: Mike Snitzer
Cc: dm-devel@lists.linux.dev
Reviewed-by: Mikulas Patocka
---
 drivers/md/dm-crypt.c | 36 ++----------------------------------
 1 file changed, 2 insertions(+), 34 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 855b482cbff1..619c762d4072 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -73,11 +73,8 @@ struct dm_crypt_io {
         struct bio *base_bio;
         u8 *integrity_metadata;
         bool integrity_metadata_from_pool:1;
-        bool in_tasklet:1;
 
         struct work_struct work;
-        struct tasklet_struct tasklet;
-
         struct convert_context ctx;
 
         atomic_t io_pending;
@@ -1762,7 +1759,6 @@ static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
         io->ctx.r.req = NULL;
         io->integrity_metadata = NULL;
         io->integrity_metadata_from_pool = false;
-        io->in_tasklet = false;
         atomic_set(&io->io_pending, 0);
 }
 
@@ -1771,13 +1767,6 @@ static void crypt_inc_pending(struct dm_crypt_io *io)
         atomic_inc(&io->io_pending);
 }
 
-static void kcryptd_io_bio_endio(struct work_struct *work)
-{
-        struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
-
-        bio_endio(io->base_bio);
-}
-
 /*
  * One of the bios was finished. Check for completion of
  * the whole request and correctly clean up the buffer.
@@ -1800,21 +1789,6 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
         kfree(io->integrity_metadata);
 
         base_bio->bi_status = error;
-
-        /*
-         * If we are running this function from our tasklet,
-         * we can't call bio_endio() here, because it will call
-         * clone_endio() from dm.c, which in turn will
-         * free the current struct dm_crypt_io structure with
-         * our tasklet. In this case we need to delay bio_endio()
-         * execution to after the tasklet is done and dequeued.
-         */
-        if (io->in_tasklet) {
-                INIT_WORK(&io->work, kcryptd_io_bio_endio);
-                queue_work(cc->io_queue, &io->work);
-                return;
-        }
-
         bio_endio(base_bio);
 }
 
@@ -2246,11 +2220,6 @@ static void kcryptd_crypt(struct work_struct *work)
                 kcryptd_crypt_write_convert(io);
 }
 
-static void kcryptd_crypt_tasklet(unsigned long work)
-{
-        kcryptd_crypt((struct work_struct *)work);
-}
-
 static void kcryptd_queue_crypt(struct dm_crypt_io *io)
 {
         struct crypt_config *cc = io->cc;
@@ -2263,9 +2232,8 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
          * it is being executed with irqs disabled.
          */
         if (in_hardirq() || irqs_disabled()) {
-                io->in_tasklet = true;
-                tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
-                tasklet_schedule(&io->tasklet);
+                INIT_WORK(&io->work, kcryptd_crypt);
+                queue_work(system_bh_wq, &io->work);
                 return;
         }
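A minimal sketch of the dispatch decision the last hunk above leaves behind
follows. It is not part of the patch: "example_io", "example_wq" and the
function names are made-up stand-ins, and error handling plus the crypto
state are omitted. Because a work item, unlike a tasklet, may be freed by
its own callback, the extra bio_endio() bounce that the removed hunks
implemented is no longer needed.

/*
 * Illustrative only: defer to the BH workqueue when called in hard-IRQ
 * context or with interrupts disabled, otherwise use the regular
 * per-device workqueue.
 */
#include <linux/workqueue.h>
#include <linux/preempt.h>
#include <linux/irqflags.h>

struct example_io {
        struct work_struct work;
        /* ... per-bio crypto state ... */
};

static void example_crypt_workfn(struct work_struct *work)
{
        /* encrypt/decrypt the bio, as kcryptd_crypt() does */
}

static void example_queue_crypt(struct example_io *io,
                                struct workqueue_struct *example_wq)
{
        INIT_WORK(&io->work, example_crypt_workfn);

        /* was: tasklet_init() + tasklet_schedule() in this branch */
        if (in_hardirq() || irqs_disabled())
                queue_work(system_bh_wq, &io->work);
        else
                queue_work(example_wq, &io->work);
}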
From patchwork Tue Jan 30 09:11:55 2024
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13537045
X-Patchwork-Delegate: snitzer@redhat.com
From: Tejun Heo
To: torvalds@linux-foundation.org, mpatocka@redhat.com
Cc: linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, msnitzer@redhat.com,
    ignat@cloudflare.com, damien.lemoal@wdc.com, bob.liu@oracle.com,
    houtao1@huawei.com, peterz@infradead.org, mingo@kernel.org,
    netdev@vger.kernel.org, allen.lkml@gmail.com, kernel-team@meta.com,
    Tejun Heo, Alasdair Kergon, Mike Snitzer
Subject: [PATCH 8/8] dm-verity: Convert from tasklet to BH workqueue
Date: Mon, 29 Jan 2024 23:11:55 -1000
Message-ID: <20240130091300.2968534-9-tj@kernel.org>
In-Reply-To: <20240130091300.2968534-1-tj@kernel.org>
References: <20240130091300.2968534-1-tj@kernel.org>

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts dm-verity from tasklet to BH workqueue. It is a minimal
conversion which doesn't rename the related identifiers, including the
"try_verify_in_tasklet" option, so a follow-up renaming patch would be
necessary if this patch is applied. I couldn't decide whether the option
name would need to be updated too.

Only compile tested. I don't know how to test dm-verity.
Signed-off-by: Tejun Heo
Cc: Alasdair Kergon
Cc: Mike Snitzer
Cc: Mikulas Patocka
Cc: dm-devel@lists.linux.dev
---
 drivers/md/dm-verity-target.c | 8 ++++----
 drivers/md/dm-verity.h        | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 14e58ae70521..911261de2d08 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -645,9 +645,9 @@ static void verity_work(struct work_struct *w)
         verity_finish_io(io, errno_to_blk_status(verity_verify_io(io)));
 }
 
-static void verity_tasklet(unsigned long data)
+static void verity_bh_work(struct work_struct *w)
 {
-        struct dm_verity_io *io = (struct dm_verity_io *)data;
+        struct dm_verity_io *io = container_of(w, struct dm_verity_io, bh_work);
         int err;
 
         io->in_tasklet = true;
@@ -675,8 +675,8 @@ static void verity_end_io(struct bio *bio)
         }
 
         if (static_branch_unlikely(&use_tasklet_enabled) && io->v->use_tasklet) {
-                tasklet_init(&io->tasklet, verity_tasklet, (unsigned long)io);
-                tasklet_schedule(&io->tasklet);
+                INIT_WORK(&io->bh_work, verity_bh_work);
+                queue_work(system_bh_wq, &io->bh_work);
         } else {
                 INIT_WORK(&io->work, verity_work);
                 queue_work(io->v->verify_wq, &io->work);
diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
index f9d522c870e6..7c16f834f31a 100644
--- a/drivers/md/dm-verity.h
+++ b/drivers/md/dm-verity.h
@@ -83,7 +83,7 @@ struct dm_verity_io {
         struct bvec_iter iter;
 
         struct work_struct work;
-        struct tasklet_struct tasklet;
+        struct work_struct bh_work;
 
         /*
          * Three variably-sized fields follow this struct:
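One practical consequence of the conversions in this series, sketched below
for illustration only: once the deferred handler is an ordinary work item,
teardown paths can use the regular workqueue primitives instead of
tasklet_kill(), as the updated comment in the usb_remove_hcd() hunk of
patch 5/8 hints. The "my_driver_bh" type matches the earlier sketch in this
archive and is not part of any patch.

/*
 * Illustrative teardown sketch: flush_work() waits for a queued or running
 * instance to finish; cancel_work_sync() additionally cancels a pending one.
 */
static void my_driver_shutdown(struct my_driver_bh *bh)
{
        /* wait for an in-flight execution to complete ... */
        flush_work(&bh->bh);

        /* ... or cancel it outright if it no longer needs to run */
        cancel_work_sync(&bh->bh);
}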