From patchwork Mon Sep 16 10:13:43 2024
X-Patchwork-Id: 13805233
From: Lorenzo Bianconi
To: bpf@vger.kernel.org
Cc: kuba@kernel.org, aleksander.lobakin@intel.com, ast@kernel.org,
 daniel@iogearbox.net, andrii@kernel.org, dxu@dxuuu.xyz,
 john.fastabend@gmail.com, hawk@kernel.org, martin.lau@linux.dev,
 davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
 netdev@vger.kernel.org, lorenzo.bianconi@redhat.com
Subject: [RFC/RFT v2 1/3] net: Add napi_init_for_gro routine
Date: Mon, 16 Sep 2024 12:13:43 +0200
Message-ID: <1383fe802f0edbfb527e9d6c45729d31f7be6d32.1726480607.git.lorenzo@kernel.org>

Introduce the napi_init_for_gro() utility routine to initialize a
napi_struct for GRO. This is a preliminary patch to introduce GRO
support to the cpumap codebase.
Signed-off-by: Lorenzo Bianconi
---
 include/linux/netdevice.h |  2 ++
 net/core/dev.c            | 23 +++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 607009150b5fa..3c4c3ae2170f0 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2628,6 +2628,8 @@ static inline void netif_napi_set_irq(struct napi_struct *napi, int irq)
  */
 #define NAPI_POLL_WEIGHT 64
 
+int napi_init_for_gro(struct net_device *dev, struct napi_struct *napi,
+		      int (*poll)(struct napi_struct *, int), int weight);
 void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 			   int (*poll)(struct napi_struct *, int), int weight);
 
diff --git a/net/core/dev.c b/net/core/dev.c
index f66e614078832..c87c510abc05b 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6638,13 +6638,14 @@ void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 }
 EXPORT_SYMBOL(netif_queue_set_napi);
 
-void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
-			   int (*poll)(struct napi_struct *, int), int weight)
+int napi_init_for_gro(struct net_device *dev, struct napi_struct *napi,
+		      int (*poll)(struct napi_struct *, int), int weight)
 {
 	if (WARN_ON(test_and_set_bit(NAPI_STATE_LISTED, &napi->state)))
-		return;
+		return -EBUSY;
 
 	INIT_LIST_HEAD(&napi->poll_list);
+	INIT_LIST_HEAD(&napi->dev_list);
 	INIT_HLIST_NODE(&napi->napi_hash_node);
 	hrtimer_init(&napi->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
 	napi->timer.function = napi_watchdog;
@@ -6662,18 +6663,28 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 	napi->poll_owner = -1;
 #endif
 	napi->list_owner = -1;
+	napi_hash_add(napi);
+	napi_get_frags_check(napi);
+	netif_napi_set_irq(napi, -1);
+
+	return 0;
+}
+
+void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
+			   int (*poll)(struct napi_struct *, int), int weight)
+{
+	if (napi_init_for_gro(dev, napi, poll, weight))
+		return;
+
 	set_bit(NAPI_STATE_SCHED, &napi->state);
 	set_bit(NAPI_STATE_NPSVC, &napi->state);
 	list_add_rcu(&napi->dev_list, &dev->napi_list);
-	napi_hash_add(napi);
-	napi_get_frags_check(napi);
 
 	/* Create kthread for this napi if dev->threaded is set.
 	 * Clear dev->threaded if kthread creation failed so that
 	 * threaded mode will not be enabled in napi_enable().
 	 */
 	if (dev->threaded && napi_kthread_create(napi))
 		dev->threaded = false;
-	netif_napi_set_irq(napi, -1);
 }
 EXPORT_SYMBOL(netif_napi_add_weight);
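To illustrate the intended use of the new helper, here is a minimal
sketch of a consumer outside a conventional driver. The my_rx_ctx
structure and the poll body are hypothetical, and passing a NULL
net_device mirrors what patch 3/3 does for cpumap:

/* Sketch: a hypothetical consumer of the new helper; my_rx_ctx and the
 * poll body are made up for illustration.
 */
struct my_rx_ctx {
	struct napi_struct napi;
	/* ... queue state drained by my_poll() ... */
};

static int my_poll(struct napi_struct *napi, int budget)
{
	int done = 0;

	/* Dequeue up to @budget packets here and feed each skb to
	 * napi_gro_receive(napi, skb) so GRO can aggregate them.
	 */
	if (done < budget)
		napi_complete(napi);

	return done;
}

static int my_rx_ctx_init(struct my_rx_ctx *ctx)
{
	/* No backing net_device is needed: pass NULL, as patch 3/3 does. */
	return napi_init_for_gro(NULL, &ctx->napi, my_poll, NAPI_POLL_WEIGHT);
}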
From patchwork Mon Sep 16 10:13:44 2024
X-Patchwork-Id: 13805234
From: Lorenzo Bianconi
To: bpf@vger.kernel.org
Cc: kuba@kernel.org, aleksander.lobakin@intel.com, ast@kernel.org,
 daniel@iogearbox.net, andrii@kernel.org, dxu@dxuuu.xyz,
 john.fastabend@gmail.com, hawk@kernel.org, martin.lau@linux.dev,
 davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
 netdev@vger.kernel.org, lorenzo.bianconi@redhat.com
Subject: [RFC/RFT v2 2/3] net: add napi_threaded_poll to netdevice.h
Date: Mon, 16 Sep 2024 12:13:44 +0200

Move the napi_threaded_poll() declaration to netdevice.h and drop the
static keyword so the routine can be reused by the cpumap codebase.
Signed-off-by: Lorenzo Bianconi
---
 include/linux/netdevice.h | 1 +
 net/core/dev.c            | 4 +---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3c4c3ae2170f0..3bf7e22965cd5 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2628,6 +2628,7 @@ static inline void netif_napi_set_irq(struct napi_struct *napi, int irq)
  */
 #define NAPI_POLL_WEIGHT 64
 
+int napi_threaded_poll(void *data);
 int napi_init_for_gro(struct net_device *dev, struct napi_struct *napi,
 		      int (*poll)(struct napi_struct *, int), int weight);
 void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
diff --git a/net/core/dev.c b/net/core/dev.c
index c87c510abc05b..8c1b3d1df261d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1417,8 +1417,6 @@ void netdev_notify_peers(struct net_device *dev)
 }
 EXPORT_SYMBOL(netdev_notify_peers);
 
-static int napi_threaded_poll(void *data);
-
 static int napi_kthread_create(struct napi_struct *n)
 {
 	int err = 0;
@@ -6922,7 +6920,7 @@ static void napi_threaded_poll_loop(struct napi_struct *napi)
 	}
 }
 
-static int napi_threaded_poll(void *data)
+int napi_threaded_poll(void *data)
 {
 	struct napi_struct *napi = data;
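With the symbol exported, code outside net/core/dev.c can spawn its own
NAPI kthread without going through dev->threaded. A minimal sketch
under that assumption, following the kthread_run_on_cpu() pattern used
in patch 3/3; the function and the thread name are hypothetical:

/* Sketch: run an already-initialized napi_struct on its own pinned
 * kthread, now that napi_threaded_poll() is visible outside
 * net/core/dev.c. The function and the "my-napi/%d" name are made up.
 */
static int my_start_napi_thread(struct napi_struct *napi, int cpu)
{
	set_bit(NAPI_STATE_THREADED, &napi->state);

	napi->thread = kthread_run_on_cpu(napi_threaded_poll, napi,
					  cpu, "my-napi/%d");
	if (IS_ERR(napi->thread))
		return PTR_ERR(napi->thread);

	return 0;
}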
From patchwork Mon Sep 16 10:13:45 2024
X-Patchwork-Id: 13805235
From: Lorenzo Bianconi
To: bpf@vger.kernel.org
Cc: kuba@kernel.org, aleksander.lobakin@intel.com, ast@kernel.org,
 daniel@iogearbox.net, andrii@kernel.org, dxu@dxuuu.xyz,
 john.fastabend@gmail.com, hawk@kernel.org, martin.lau@linux.dev,
 davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
 netdev@vger.kernel.org, lorenzo.bianconi@redhat.com
Subject: [RFC/RFT v2 3/3] bpf: cpumap: Add gro support
Date: Mon, 16 Sep 2024 12:13:45 +0200

Introduce GRO support to the cpumap codebase by moving the
cpu_map_entry kthread to a NAPI-kthread pinned on the selected CPU.

Signed-off-by: Lorenzo Bianconi
---
 kernel/bpf/cpumap.c | 123 +++++++++++++++++++-------------------------
 1 file changed, 52 insertions(+), 71 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index fbdf5a1aabfe4..3ec6739aec5ae 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -62,9 +62,11 @@ struct bpf_cpu_map_entry {
 	/* XDP can run multiple RX-ring queues, need __percpu enqueue store */
 	struct xdp_bulk_queue __percpu *bulkq;
 
-	/* Queue with potential multi-producers, and single-consumer kthread */
+	/* Queue with potential multi-producers, and single-consumer
+	 * NAPI-kthread
+	 */
 	struct ptr_ring *queue;
-	struct task_struct *kthread;
+	struct napi_struct napi;
 
 	struct bpf_cpumap_val value;
 	struct bpf_prog *prog;
@@ -261,58 +263,42 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 	return nframes;
 }
 
-static int cpu_map_kthread_run(void *data)
+static int cpu_map_poll(struct napi_struct *napi, int budget)
 {
-	struct bpf_cpu_map_entry *rcpu = data;
-	unsigned long last_qs = jiffies;
+	struct xdp_cpumap_stats stats = {}; /* zero stats */
+	unsigned int kmem_alloc_drops = 0;
+	struct bpf_cpu_map_entry *rcpu;
+	int done = 0;
 
+	rcu_read_lock();
+	rcpu = container_of(napi, struct bpf_cpu_map_entry, napi);
 	complete(&rcpu->kthread_running);
-	set_current_state(TASK_INTERRUPTIBLE);
 
-	/* When kthread gives stop order, then rcpu have been disconnected
-	 * from map, thus no new packets can enter. Remaining in-flight
-	 * per CPU stored packets are flushed to this queue. Wait honoring
-	 * kthread_stop signal until queue is empty.
-	 */
-	while (!kthread_should_stop() || !__ptr_ring_empty(rcpu->queue)) {
-		struct xdp_cpumap_stats stats = {}; /* zero stats */
-		unsigned int kmem_alloc_drops = 0, sched = 0;
+	while (done < budget) {
 		gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
-		int i, n, m, nframes, xdp_n;
+		int n, i, m, xdp_n = 0, nframes;
 		void *frames[CPUMAP_BATCH];
+		struct sk_buff *skb, *tmp;
 		void *skbs[CPUMAP_BATCH];
 		LIST_HEAD(list);
 
-		/* Release CPU reschedule checks */
-		if (__ptr_ring_empty(rcpu->queue)) {
-			set_current_state(TASK_INTERRUPTIBLE);
-			/* Recheck to avoid lost wake-up */
-			if (__ptr_ring_empty(rcpu->queue)) {
-				schedule();
-				sched = 1;
-				last_qs = jiffies;
-			} else {
-				__set_current_state(TASK_RUNNING);
-			}
-		} else {
-			rcu_softirq_qs_periodic(last_qs);
-			sched = cond_resched();
-		}
-
+		if (__ptr_ring_empty(rcpu->queue))
+			break;
 		/*
 		 * The bpf_cpu_map_entry is single consumer, with this
 		 * kthread CPU pinned. Lockless access to ptr_ring
 		 * consume side valid as no-resize allowed of queue.
 		 */
-		n = __ptr_ring_consume_batched(rcpu->queue, frames,
-					       CPUMAP_BATCH);
-		for (i = 0, xdp_n = 0; i < n; i++) {
+		n = min(budget - done, CPUMAP_BATCH);
+		n = __ptr_ring_consume_batched(rcpu->queue, frames, n);
+		done += n;
+
+		for (i = 0; i < n; i++) {
 			void *f = frames[i];
 			struct page *page;
 
 			if (unlikely(__ptr_test_bit(0, &f))) {
-				struct sk_buff *skb = f;
-
+				skb = f;
 				__ptr_clear_bit(0, &skb);
 				list_add_tail(&skb->list, &list);
 				continue;
@@ -340,12 +326,10 @@ static int cpu_map_kthread_run(void *data)
 			}
 		}
 
-		local_bh_disable();
 		for (i = 0; i < nframes; i++) {
 			struct xdp_frame *xdpf = frames[i];
-			struct sk_buff *skb = skbs[i];
 
-			skb = __xdp_build_skb_from_frame(xdpf, skb,
+			skb = __xdp_build_skb_from_frame(xdpf, skbs[i],
 							 xdpf->dev_rx);
 			if (!skb) {
 				xdp_return_frame(xdpf);
@@ -354,17 +338,21 @@ static int cpu_map_kthread_run(void *data)
 			list_add_tail(&skb->list, &list);
 		}
 
-		netif_receive_skb_list(&list);
-
-		/* Feedback loop via tracepoint */
-		trace_xdp_cpumap_kthread(rcpu->map_id, n, kmem_alloc_drops,
-					 sched, &stats);
-		local_bh_enable(); /* resched point, may call do_softirq() */
+		list_for_each_entry_safe(skb, tmp, &list, list) {
+			skb_list_del_init(skb);
+			napi_gro_receive(napi, skb);
+		}
 	}
-	__set_current_state(TASK_RUNNING);
 
-	return 0;
+	rcu_read_unlock();
+	/* Feedback loop via tracepoint */
+	trace_xdp_cpumap_kthread(rcpu->map_id, done, kmem_alloc_drops, 0,
+				 &stats);
+	if (done < budget)
+		napi_complete(napi);
+
+	return done;
 }
 
 static int __cpu_map_load_bpf_program(struct bpf_cpu_map_entry *rcpu,
@@ -432,18 +420,19 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value,
 	if (fd > 0 && __cpu_map_load_bpf_program(rcpu, map, fd))
 		goto free_ptr_ring;
 
+	napi_init_for_gro(NULL, &rcpu->napi, cpu_map_poll,
+			  NAPI_POLL_WEIGHT);
+	set_bit(NAPI_STATE_THREADED, &rcpu->napi.state);
+
 	/* Setup kthread */
 	init_completion(&rcpu->kthread_running);
-	rcpu->kthread = kthread_create_on_node(cpu_map_kthread_run, rcpu, numa,
-					       "cpumap/%d/map:%d", cpu,
-					       map->id);
-	if (IS_ERR(rcpu->kthread))
+	rcpu->napi.thread = kthread_run_on_cpu(napi_threaded_poll,
+					       &rcpu->napi, cpu,
+					       "cpumap-napi/%d");
+	if (IS_ERR(rcpu->napi.thread))
 		goto free_prog;
 
-	/* Make sure kthread runs on a single CPU */
-	kthread_bind(rcpu->kthread, cpu);
-	wake_up_process(rcpu->kthread);
-
+	napi_schedule(&rcpu->napi);
 	/* Make sure kthread has been running, so kthread_stop() will not
 	 * stop the kthread prematurely and all pending frames or skbs
 	 * will be handled by the kthread before kthread_stop() returns.
@@ -477,12 +466,8 @@ static void __cpu_map_entry_free(struct work_struct *work)
 	 */
 	rcpu = container_of(to_rcu_work(work), struct bpf_cpu_map_entry,
 			    free_work);
-	/* kthread_stop will wake_up_process and wait for it to complete.
-	 * cpu_map_kthread_run() makes sure the pointer ring is empty
-	 * before exiting.
-	 */
-	kthread_stop(rcpu->kthread);
-
+	napi_disable(&rcpu->napi);
+	__netif_napi_del(&rcpu->napi);
 	if (rcpu->prog)
 		bpf_prog_put(rcpu->prog);
 	/* The queue should be empty at this point */
@@ -498,8 +483,8 @@
  * __cpu_map_entry_free() in a separate workqueue after waiting for an RCU grace
  * period. This means that (a) all pending enqueue and flush operations have
  * completed (because of the RCU callback), and (b) we are in a workqueue
- * context where we can stop the kthread and wait for it to exit before freeing
- * everything.
+ * context where we can stop the NAPI-kthread and wait for it to exit before
+ * freeing everything.
  */
 static void __cpu_map_entry_replace(struct bpf_cpu_map *cmap, u32 key_cpu,
 				    struct bpf_cpu_map_entry *rcpu)
@@ -579,9 +564,7 @@ static void cpu_map_free(struct bpf_map *map)
 	 */
 	synchronize_rcu();
 
-	/* The only possible user of bpf_cpu_map_entry is
-	 * cpu_map_kthread_run().
-	 */
+	/* The only possible user of bpf_cpu_map_entry is the NAPI-kthread. */
 	for (i = 0; i < cmap->map.max_entries; i++) {
 		struct bpf_cpu_map_entry *rcpu;
 
@@ -589,7 +572,7 @@ static void cpu_map_free(struct bpf_map *map)
 		if (!rcpu)
 			continue;
 
-		/* Stop kthread and cleanup entry directly */
+		/* Stop NAPI-kthread and cleanup entry directly */
 		__cpu_map_entry_free(&rcpu->free_work.work);
 	}
 	bpf_map_area_free(cmap->cpu_map);
@@ -753,7 +736,7 @@ int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
 	if (ret < 0)
 		goto trace;
 
-	wake_up_process(rcpu->kthread);
+	napi_schedule(&rcpu->napi);
 trace:
 	trace_xdp_cpumap_enqueue(rcpu->map_id, !ret, !!ret, rcpu->cpu);
 	return ret;
@@ -765,8 +748,6 @@ void __cpu_map_flush(struct list_head *flush_list)
 
 	list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
 		bq_flush_to_queue(bq);
-
-		/* If already running, costs spin_lock_irqsave + smb_mb */
-		wake_up_process(bq->obj->kthread);
+		napi_schedule(&bq->obj->napi);
 	}
 }
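Condensed from the hunks above, the cpu_map_entry lifecycle after this
patch looks roughly as follows; error paths and the kthread_running
completion handshake are elided:

/* Sketch: the cpu_map_entry lifecycle after this patch, condensed from
 * the hunks above. Error handling and the kthread_running completion
 * handshake are elided; see __cpu_map_entry_alloc()/__cpu_map_entry_free().
 */
static int sketch_entry_lifecycle(struct bpf_cpu_map_entry *rcpu, int cpu)
{
	/* setup: replaces kthread_create_on_node() + kthread_bind() */
	napi_init_for_gro(NULL, &rcpu->napi, cpu_map_poll, NAPI_POLL_WEIGHT);
	set_bit(NAPI_STATE_THREADED, &rcpu->napi.state);
	rcpu->napi.thread = kthread_run_on_cpu(napi_threaded_poll, &rcpu->napi,
					       cpu, "cpumap-napi/%d");
	if (IS_ERR(rcpu->napi.thread))
		return PTR_ERR(rcpu->napi.thread);

	/* producers kick the consumer: replaces wake_up_process() */
	napi_schedule(&rcpu->napi);

	/* teardown: replaces kthread_stop() */
	napi_disable(&rcpu->napi);
	__netif_napi_del(&rcpu->napi);

	return 0;
}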