From patchwork Fri Oct 7 16:56:34 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 1/4] eventpoll: cleanup branches around sleeping for events
Date: Fri, 7 Oct 2022 10:56:34 -0600
Message-Id: <20221007165637.22374-2-axboe@kernel.dk>
In-Reply-To: <20221007165637.22374-1-axboe@kernel.dk>
References: <20221007165637.22374-1-axboe@kernel.dk>
Rather than have two separate branches here, collapse them into a
single one instead. No functional changes here, just a cleanup in
preparation for changes in this area.

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 8b56b94e2f56..8a75ae70e312 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1869,14 +1869,15 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 * important.
 		 */
 		eavail = ep_events_available(ep);
-		if (!eavail)
+		if (!eavail) {
 			__add_wait_queue_exclusive(&ep->wq, &wait);
-
-		write_unlock_irq(&ep->lock);
-
-		if (!eavail)
+			write_unlock_irq(&ep->lock);
 			timed_out = !schedule_hrtimeout_range(to, slack,
 							      HRTIMER_MODE_ABS);
+		} else {
+			write_unlock_irq(&ep->lock);
+		}
+
 		__set_current_state(TASK_RUNNING);
 
 		/*

From patchwork Fri Oct 7 16:56:35 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/4] eventpoll: split out wait handling
Date: Fri, 7 Oct 2022 10:56:35 -0600
Message-Id: <20221007165637.22374-3-axboe@kernel.dk>
In-Reply-To: <20221007165637.22374-1-axboe@kernel.dk>
References: <20221007165637.22374-1-axboe@kernel.dk>

In preparation for making changes to how wakeups and sleeps are done,
move the timeout scheduling into a helper that manages the hrtimer
itself, rather than relying on schedule_hrtimeout_range().

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 70 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 56 insertions(+), 14 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 8a75ae70e312..01b9dab2b68c 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1762,6 +1762,47 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 	return ret;
 }
 
+struct epoll_wq {
+	wait_queue_entry_t wait;
+	struct hrtimer timer;
+	bool timed_out;
+};
+
+static enum hrtimer_restart ep_timer(struct hrtimer *timer)
+{
+	struct epoll_wq *ewq = container_of(timer, struct epoll_wq, timer);
+	struct task_struct *task = ewq->wait.private;
+
+	ewq->timed_out = true;
+	wake_up_process(task);
+	return HRTIMER_NORESTART;
+}
+
+static void ep_schedule(struct eventpoll *ep, struct epoll_wq *ewq, ktime_t *to,
+			u64 slack)
+{
+	if (ewq->timed_out)
+		return;
+	if (to && *to == 0) {
+		ewq->timed_out = true;
+		return;
+	}
+	if (!to) {
+		schedule();
+		return;
+	}
+
+	hrtimer_init_on_stack(&ewq->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	ewq->timer.function = ep_timer;
+	hrtimer_set_expires_range_ns(&ewq->timer, *to, slack);
+	hrtimer_start_expires(&ewq->timer, HRTIMER_MODE_ABS);
+
+	schedule();
+
+	hrtimer_cancel(&ewq->timer);
+	destroy_hrtimer_on_stack(&ewq->timer);
+}
+
 /**
  * ep_poll - Retrieves ready events, and delivers them to the caller-supplied
  *           event buffer.
@@ -1782,13 +1823,15 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		   int maxevents, struct timespec64 *timeout)
 {
-	int res, eavail, timed_out = 0;
+	int res, eavail;
 	u64 slack = 0;
-	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
+	struct epoll_wq ewq;
 
 	lockdep_assert_irqs_enabled();
 
+	ewq.timed_out = false;
+
 	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
 		slack = select_estimate_accuracy(timeout);
 		to = &expires;
@@ -1798,7 +1841,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 * Avoid the unnecessary trip to the wait queue loop, if the
 		 * caller specified a non blocking operation.
 		 */
-		timed_out = 1;
+		ewq.timed_out = 1;
 	}
 
 	/*
@@ -1823,10 +1866,10 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 			return res;
 		}
 
-		if (timed_out)
+		if (ewq.timed_out)
 			return 0;
 
-		eavail = ep_busy_loop(ep, timed_out);
+		eavail = ep_busy_loop(ep, ewq.timed_out);
 		if (eavail)
 			continue;
 
@@ -1850,8 +1893,8 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 * performance issue if a process is killed, causing all of its
 		 * threads to wake up without being removed normally.
 		 */
-		init_wait(&wait);
-		wait.func = ep_autoremove_wake_function;
+		init_wait(&ewq.wait);
+		ewq.wait.func = ep_autoremove_wake_function;
 
 		write_lock_irq(&ep->lock);
 		/*
@@ -1870,10 +1913,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 */
 		eavail = ep_events_available(ep);
 		if (!eavail) {
-			__add_wait_queue_exclusive(&ep->wq, &wait);
+			__add_wait_queue_exclusive(&ep->wq, &ewq.wait);
 			write_unlock_irq(&ep->lock);
-			timed_out = !schedule_hrtimeout_range(to, slack,
-							      HRTIMER_MODE_ABS);
+			ep_schedule(ep, &ewq, to, slack);
 		} else {
 			write_unlock_irq(&ep->lock);
 		}
@@ -1887,7 +1929,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 */
 		eavail = 1;
 
-		if (!list_empty_careful(&wait.entry)) {
+		if (!list_empty_careful(&ewq.wait.entry)) {
 			write_lock_irq(&ep->lock);
 			/*
 			 * If the thread timed out and is not on the wait queue,
@@ -1896,9 +1938,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 			 * Thus, when wait.entry is empty, it needs to harvest
 			 * events.
 			 */
-			if (timed_out)
-				eavail = list_empty(&wait.entry);
-			__remove_wait_queue(&ep->wq, &wait);
+			if (ewq.timed_out)
+				eavail = list_empty(&ewq.wait.entry);
+			__remove_wait_queue(&ep->wq, &ewq.wait);
 			write_unlock_irq(&ep->lock);
 		}
 	}

From patchwork Fri Oct 7 16:56:36 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/4] eventpoll: move expires to epoll_wq
Date: Fri, 7 Oct 2022 10:56:36 -0600
Message-Id: <20221007165637.22374-4-axboe@kernel.dk>
In-Reply-To: <20221007165637.22374-1-axboe@kernel.dk>
References: <20221007165637.22374-1-axboe@kernel.dk>

This makes the expiration time available to the wakeup handler. No
functional changes expected in this patch; it is purely in preparation
for being able to use the timeout on the wakeup side.
Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 01b9dab2b68c..79aa61a951df 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1765,6 +1765,7 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 struct epoll_wq {
 	wait_queue_entry_t wait;
 	struct hrtimer timer;
+	ktime_t timeout_ts;
 	bool timed_out;
 };
 
@@ -1825,7 +1826,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 {
 	int res, eavail;
 	u64 slack = 0;
-	ktime_t expires, *to = NULL;
+	ktime_t *to = NULL;
 	struct epoll_wq ewq;
 
 	lockdep_assert_irqs_enabled();
@@ -1834,7 +1835,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 
 	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
 		slack = select_estimate_accuracy(timeout);
-		to = &expires;
+		to = &ewq.timeout_ts;
 		*to = timespec64_to_ktime(*timeout);
 	} else if (timeout) {
 		/*

From patchwork Fri Oct 7 16:56:37 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 4/4] eventpoll: add support for min-wait
Date: Fri, 7 Oct 2022 10:56:37 -0600
Message-Id: <20221007165637.22374-5-axboe@kernel.dk>
In-Reply-To: <20221007165637.22374-1-axboe@kernel.dk>
References: <20221007165637.22374-1-axboe@kernel.dk>

Rather than just have a timeout value for waiting on events, add
EPOLL_CTL_MIN_WAIT to allow setting a minimum time that epoll_wait()
should always wait for events to arrive.

To get better batching and higher efficiency under medium load, some
production workloads inject artificial timers or sleeps before calling
epoll_wait(). While this does help, it's not as efficient as it could
be. By supporting this directly in epoll_wait(), we can avoid the
extra context switches and the scheduler and timer overhead.

As an example, in an A/B test on an identical workload at about ~370K
reqs/second: without this change, and with the sleep hack mentioned
above (using 200 usec as the timeout), we're doing 310K-340K
non-voluntary context switches per second, with 27-34% idle CPU on the
host. With the sleep hack removed and epoll min-wait set to the same
200 usec value, we handle the exact same load at 292K-315K
non-voluntary context switches, with 33-41% idle CPU, a substantial
win.
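For reference, the userspace "sleep hack" being replaced looks roughly
like the sketch below. This is a hypothetical illustration rather than
code from any particular workload; the helper name is made up, and the
200 usec delay mirrors the A/B test figures above:

#include <unistd.h>
#include <sys/epoll.h>

/*
 * Sketch of the sleep hack: delay before reaping events so that more
 * of them accumulate, at the cost of an extra timer and context switch
 * per loop iteration. min-wait folds this into epoll_wait() itself.
 */
static int epoll_wait_batched(int efd, struct epoll_event *events,
			      int maxevents, int timeout)
{
	usleep(200);	/* artificial 200 usec delay before waiting */
	return epoll_wait(efd, events, maxevents, timeout);
}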
Basic test case:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/epoll.h>

struct d {
	int p1, p2;
};

static void *fn(void *data)
{
	struct d *d = data;
	char b = 0x89;

	/* generate 2 events, 10 msec apart (at ~10 and ~20 msec) */
	usleep(10000);
	write(d->p1, &b, sizeof(b));
	usleep(10000);
	write(d->p2, &b, sizeof(b));
	return NULL;
}

int main(int argc, char *argv[])
{
	struct epoll_event ev, events[2];
	pthread_t thread;
	int p1[2], p2[2];
	struct d d;
	int efd, ret;

	efd = epoll_create1(0);
	if (efd < 0) {
		perror("epoll_create");
		return 1;
	}

	if (pipe(p1) < 0) {
		perror("pipe");
		return 1;
	}
	if (pipe(p2) < 0) {
		perror("pipe");
		return 1;
	}

	ev.events = EPOLLIN;
	ev.data.fd = p1[0];
	if (epoll_ctl(efd, EPOLL_CTL_ADD, p1[0], &ev) < 0) {
		perror("epoll add");
		return 1;
	}
	ev.events = EPOLLIN;
	ev.data.fd = p2[0];
	if (epoll_ctl(efd, EPOLL_CTL_ADD, p2[0], &ev) < 0) {
		perror("epoll add");
		return 1;
	}

	/* always wait at least 200 msec for events */
	ev.data.u64 = 200000;
	if (epoll_ctl(efd, EPOLL_CTL_MIN_WAIT, -1, &ev) < 0) {
		perror("epoll add set timeout");
		return 1;
	}

	d.p1 = p1[1];
	d.p2 = p2[1];
	pthread_create(&thread, NULL, fn, &d);

	/* expect to get 2 events here rather than just 1 */
	ret = epoll_wait(efd, events, 2, -1);
	printf("epoll_wait=%d\n", ret);
	return 0;
}

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c                 | 132 ++++++++++++++++++++++++++-------
 include/linux/eventpoll.h      |   2 +-
 include/uapi/linux/eventpoll.h |   1 +
 3 files changed, 109 insertions(+), 26 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 79aa61a951df..ccb8400e2252 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -39,6 +39,11 @@
 #include <linux/rculist.h>
 #include <net/busy_poll.h>
 
+/*
+ * If a default min_wait timeout is desired, set this to non-zero. In usecs.
+ */
+#define EPOLL_DEF_MIN_WAIT 0
+
 /*
  * LOCKING:
  * There are three level of locking required by epoll :
@@ -117,6 +122,9 @@ struct eppoll_entry {
 	/* The "base" pointer is set to the container "struct epitem" */
 	struct epitem *base;
 
+	/* min wait time if (min_wait_ts) & 1 != 0 */
+	ktime_t min_wait_ts;
+
 	/*
 	 * Wait queue item that will be linked to the target file wait
 	 * queue head.
@@ -217,6 +225,9 @@ struct eventpoll {
 	u64 gen;
 	struct hlist_head refs;
 
+	/* min wait for epoll_wait() */
+	unsigned int min_wait_ts;
+
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	/* used to track busy poll napi_id */
 	unsigned int napi_id;
@@ -953,6 +964,7 @@ static int ep_alloc(struct eventpoll **pep)
 	ep->rbr = RB_ROOT_CACHED;
 	ep->ovflist = EP_UNACTIVE_PTR;
 	ep->user = user;
+	ep->min_wait_ts = EPOLL_DEF_MIN_WAIT;
 
 	*pep = ep;
 
@@ -1747,6 +1759,32 @@ static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms)
 	return to;
 }
 
+struct epoll_wq {
+	wait_queue_entry_t wait;
+	struct hrtimer timer;
+	ktime_t timeout_ts;
+	ktime_t min_wait_ts;
+	struct eventpoll *ep;
+	bool timed_out;
+	int maxevents;
+	int wakeups;
+};
+
+static bool ep_should_min_wait(struct epoll_wq *ewq)
+{
+	if (ewq->min_wait_ts & 1) {
+		/* just an approximation */
+		if (++ewq->wakeups >= ewq->maxevents)
+			goto stop_wait;
+		if (ktime_before(ktime_get_ns(), ewq->min_wait_ts))
+			return true;
+	}
+
+stop_wait:
+	ewq->min_wait_ts &= ~(u64) 1;
+	return false;
+}
+
 /*
  * autoremove_wake_function, but remove even on failure to wake up, because we
  * know that default_wake_function/ttwu will only fail if the thread is already
@@ -1756,27 +1794,37 @@ static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms)
 static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 				       unsigned int mode, int sync, void *key)
 {
-	int ret = default_wake_function(wq_entry, mode, sync, key);
+	struct epoll_wq *ewq = container_of(wq_entry, struct epoll_wq, wait);
+	int ret;
+
+	/*
+	 * If min wait time hasn't been satisfied yet, keep waiting
+	 */
+	if (ep_should_min_wait(ewq))
+		return 0;
 
+	ret = default_wake_function(wq_entry, mode, sync, key);
 	list_del_init(&wq_entry->entry);
 	return ret;
 }
 
-struct epoll_wq {
-	wait_queue_entry_t wait;
-	struct hrtimer timer;
-	ktime_t timeout_ts;
-	bool timed_out;
-};
-
 static enum hrtimer_restart ep_timer(struct hrtimer *timer)
 {
 	struct epoll_wq *ewq = container_of(timer, struct epoll_wq, timer);
 	struct task_struct *task = ewq->wait.private;
+	const bool is_min_wait = ewq->min_wait_ts & 1;
+
+	if (!is_min_wait || ep_events_available(ewq->ep)) {
+		if (!is_min_wait)
+			ewq->timed_out = true;
+		ewq->min_wait_ts &= ~(u64) 1;
+		wake_up_process(task);
+		return HRTIMER_NORESTART;
+	}
 
-	ewq->timed_out = true;
-	wake_up_process(task);
-	return HRTIMER_NORESTART;
+	ewq->min_wait_ts &= ~(u64) 1;
+	hrtimer_set_expires_range_ns(&ewq->timer, ewq->timeout_ts, 0);
+	return HRTIMER_RESTART;
 }
@@ -1831,12 +1879,14 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 
 	lockdep_assert_irqs_enabled();
 
+	ewq.ep = ep;
 	ewq.timed_out = false;
+	ewq.maxevents = maxevents;
+	ewq.wakeups = 0;
 
 	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
 		slack = select_estimate_accuracy(timeout);
-		to = &ewq.timeout_ts;
-		*to = timespec64_to_ktime(*timeout);
+		ewq.timeout_ts = timespec64_to_ktime(*timeout);
 	} else if (timeout) {
 		/*
 		 * Avoid the unnecessary trip to the wait queue loop, if the
@@ -1845,6 +1895,21 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		ewq.timed_out = 1;
 	}
 
+	/*
+	 * If min_wait is set for this epoll instance, note the min_wait
+	 * time. Ensure the lowest bit is set in ewq.min_wait_ts, that's
+	 * the state bit for whether or not min_wait is enabled.
+	 */
+	if (ep->min_wait_ts) {
+		ewq.min_wait_ts = ktime_add_us(ktime_get_ns(),
+					       ep->min_wait_ts);
+		ewq.min_wait_ts |= (u64) 1;
+		to = &ewq.min_wait_ts;
+	} else {
+		ewq.min_wait_ts = 0;
+		to = &ewq.timeout_ts;
+	}
+
 	/*
 	 * This call is racy: We may or may not see events that are being added
 	 * to the ready list under the lock (e.g., in IRQ callbacks). For cases
@@ -1913,7 +1978,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 * important.
 		 */
 		eavail = ep_events_available(ep);
-		if (!eavail) {
+		if (!eavail || ewq.min_wait_ts & 1) {
 			__add_wait_queue_exclusive(&ep->wq, &ewq.wait);
 			write_unlock_irq(&ep->lock);
 			ep_schedule(ep, &ewq, to, slack);
@@ -2111,6 +2176,31 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 	if (!f.file)
 		goto error_return;
 
+	/*
+	 * We have to check that the file structure underneath the file
+	 * descriptor the user passed to us _is_ an eventpoll file.
+	 */
+	error = -EINVAL;
+	if (!is_file_epoll(f.file))
+		goto error_fput;
+
+	/*
+	 * At this point it is safe to assume that the "private_data" contains
+	 * our own data structure.
+	 */
+	ep = f.file->private_data;
+
+	/*
+	 * Handle EPOLL_CTL_MIN_WAIT upfront as we don't need to care about
+	 * the fd being passed in.
+	 */
+	if (op == EPOLL_CTL_MIN_WAIT) {
+		/* return old value */
+		error = ep->min_wait_ts;
+		ep->min_wait_ts = epds->data;
+		goto error_fput;
+	}
+
 	/* Get the "struct file *" for the target file */
 	tf = fdget(fd);
 	if (!tf.file)
@@ -2126,12 +2216,10 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 	ep_take_care_of_epollwakeup(epds);
 
 	/*
-	 * We have to check that the file structure underneath the file descriptor
-	 * the user passed to us _is_ an eventpoll file. And also we do not permit
-	 * adding an epoll file descriptor inside itself.
+	 * We do not permit adding an epoll file descriptor inside itself.
 	 */
 	error = -EINVAL;
-	if (f.file == tf.file || !is_file_epoll(f.file))
+	if (f.file == tf.file)
 		goto error_tgt_fput;
 
 	/*
@@ -2147,12 +2235,6 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 			goto error_tgt_fput;
 	}
 
-	/*
-	 * At this point it is safe to assume that the "private_data" contains
-	 * our own data structure.
-	 */
-	ep = f.file->private_data;
-
 	/*
 	 * When we insert an epoll file descriptor inside another epoll file
 	 * descriptor, there is the chance of creating closed loops, which are
@@ -2251,7 +2333,7 @@ SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd,
 {
 	struct epoll_event epds;
 
-	if (ep_op_has_event(op) &&
+	if ((ep_op_has_event(op) || op == EPOLL_CTL_MIN_WAIT) &&
 	    copy_from_user(&epds, event, sizeof(struct epoll_event)))
 		return -EFAULT;
 
diff --git a/include/linux/eventpoll.h b/include/linux/eventpoll.h
index 3337745d81bd..cbef635cb7e4 100644
--- a/include/linux/eventpoll.h
+++ b/include/linux/eventpoll.h
@@ -59,7 +59,7 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 /* Tells if the epoll_ctl(2) operation needs an event copy from userspace */
 static inline int ep_op_has_event(int op)
 {
-	return op != EPOLL_CTL_DEL;
+	return op != EPOLL_CTL_DEL && op != EPOLL_CTL_MIN_WAIT;
 }
 
 #else
diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
index 8a3432d0f0dc..81ecb1ca36e0 100644
--- a/include/uapi/linux/eventpoll.h
+++ b/include/uapi/linux/eventpoll.h
@@ -26,6 +26,7 @@
 #define EPOLL_CTL_ADD 1
 #define EPOLL_CTL_DEL 2
 #define EPOLL_CTL_MOD 3
+#define EPOLL_CTL_MIN_WAIT 4
 
 /* Epoll event masks */
 #define EPOLLIN		(__force __poll_t)0x00000001
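For completeness, userspace would enable min-wait along these lines.
This is an illustrative sketch only; the wrapper name and the fallback
define are not part of the series:

#include <stdint.h>
#include <sys/epoll.h>

#ifndef EPOLL_CTL_MIN_WAIT
#define EPOLL_CTL_MIN_WAIT 4	/* matches the uapi addition above */
#endif

/*
 * Set the minimum wait time, in usecs, for an epoll instance. The fd
 * argument is ignored for this op (the test case passes -1), and on
 * success epoll_ctl() returns the previous min-wait value.
 */
static int epoll_set_min_wait(int efd, uint64_t min_wait_usec)
{
	struct epoll_event ev = { .data.u64 = min_wait_usec };

	return epoll_ctl(efd, EPOLL_CTL_MIN_WAIT, -1, &ev);
}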