From patchwork Fri Oct 28 21:43:21 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 1/5] eventpoll: cleanup branches around sleeping for events
Date: Fri, 28 Oct 2022 15:43:21 -0600
Message-Id: <20221028214325.13496-2-axboe@kernel.dk>
In-Reply-To: <20221028214325.13496-1-axboe@kernel.dk>
References: <20221028214325.13496-1-axboe@kernel.dk>

Rather than have two separate branches here, collapse them into a single
one instead. No functional changes here, just a cleanup in preparation
for changes in this area.
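A minimal userspace sketch of this restructuring (an illustration only, not the kernel code: `enqueue`, `unlock`, and `sleep_wait` are hypothetical stand-ins for the eventpoll primitives) shows that the old and new shapes perform the same sequence of calls:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Record the order of calls so the two shapes can be compared. */
static char trace[64];

static void record(const char *s) { strcat(trace, s); }
static void enqueue(void)         { record("Q"); }  /* __add_wait_queue_exclusive() */
static void unlock(void)          { record("U"); }  /* write_unlock_irq() */
static void sleep_wait(void)      { record("S"); }  /* schedule_hrtimeout_range() */

/* Before: two separate !eavail checks straddling the unlock. */
static void wait_old(bool eavail)
{
	if (!eavail)
		enqueue();
	unlock();
	if (!eavail)
		sleep_wait();
}

/* After: a single branch, with the unlock duplicated into both arms. */
static void wait_new(bool eavail)
{
	if (!eavail) {
		enqueue();
		unlock();
		sleep_wait();
	} else {
		unlock();
	}
}
```

For both the ready and not-ready cases the recorded traces are identical, which is what makes this a pure cleanup.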
Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 52954d4637b5..3061bdde6cba 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1869,14 +1869,15 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
	 * important.
	 */
	eavail = ep_events_available(ep);
-	if (!eavail)
+	if (!eavail) {
		__add_wait_queue_exclusive(&ep->wq, &wait);
-
-	write_unlock_irq(&ep->lock);
-
-	if (!eavail)
+		write_unlock_irq(&ep->lock);
		timed_out = !schedule_hrtimeout_range(to, slack,
						      HRTIMER_MODE_ABS);
+	} else {
+		write_unlock_irq(&ep->lock);
+	}
+
	__set_current_state(TASK_RUNNING);

	/*

From patchwork Fri Oct 28 21:43:22 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/5] eventpoll: split out wait handling
Date: Fri, 28 Oct 2022 15:43:22 -0600
Message-Id: <20221028214325.13496-3-axboe@kernel.dk>
In-Reply-To: <20221028214325.13496-1-axboe@kernel.dk>
References: <20221028214325.13496-1-axboe@kernel.dk>

In preparation for making changes to how wakeups and sleeps are done,
move the timeout scheduling into a helper and manage the hrtimer there,
rather than relying on schedule_hrtimeout_range().

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 70 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 56 insertions(+), 14 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 3061bdde6cba..f53bb4ec9e91 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1762,6 +1762,47 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
	return ret;
 }

+struct epoll_wq {
+	wait_queue_entry_t wait;
+	struct hrtimer timer;
+	bool timed_out;
+};
+
+static enum hrtimer_restart ep_timer(struct hrtimer *timer)
+{
+	struct epoll_wq *ewq = container_of(timer, struct epoll_wq, timer);
+	struct task_struct *task = ewq->wait.private;
+
+	ewq->timed_out = true;
+	wake_up_process(task);
+	return HRTIMER_NORESTART;
+}
+
+static void ep_schedule(struct eventpoll *ep, struct epoll_wq *ewq, ktime_t *to,
+			u64 slack)
+{
+	if (ewq->timed_out)
+		return;
+	if (to && *to == 0) {
+		ewq->timed_out = true;
+		return;
+	}
+	if (!to) {
+		schedule();
+		return;
+	}
+
+	hrtimer_init_on_stack(&ewq->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	ewq->timer.function = ep_timer;
+	hrtimer_set_expires_range_ns(&ewq->timer, *to, slack);
+	hrtimer_start_expires(&ewq->timer, HRTIMER_MODE_ABS);
+
+	schedule();
+
+	hrtimer_cancel(&ewq->timer);
+	destroy_hrtimer_on_stack(&ewq->timer);
+}
+
 /**
  * ep_poll - Retrieves ready events, and delivers them to the caller-supplied
  *           event buffer.
@@ -1782,13 +1823,15 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		   int maxevents, struct timespec64 *timeout)
 {
-	int res, eavail, timed_out = 0;
+	int res, eavail;
	u64 slack = 0;
-	wait_queue_entry_t wait;
	ktime_t expires, *to = NULL;
+	struct epoll_wq ewq;

	lockdep_assert_irqs_enabled();

+	ewq.timed_out = false;
+
	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
		slack = select_estimate_accuracy(timeout);
		to = &expires;
@@ -1798,7 +1841,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		 * Avoid the unnecessary trip to the wait queue loop, if the
		 * caller specified a non blocking operation.
		 */
-		timed_out = 1;
+		ewq.timed_out = 1;
	}

	/*
@@ -1823,10 +1866,10 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
			return res;
		}

-		if (timed_out)
+		if (ewq.timed_out)
			return 0;

-		eavail = ep_busy_loop(ep, timed_out);
+		eavail = ep_busy_loop(ep, ewq.timed_out);
		if (eavail)
			continue;
@@ -1850,8 +1893,8 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		 * performance issue if a process is killed, causing all of its
		 * threads to wake up without being removed normally.
		 */
-		init_wait(&wait);
-		wait.func = ep_autoremove_wake_function;
+		init_wait(&ewq.wait);
+		ewq.wait.func = ep_autoremove_wake_function;
		write_lock_irq(&ep->lock);
		/*
@@ -1870,10 +1913,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		 */
		eavail = ep_events_available(ep);
		if (!eavail) {
-			__add_wait_queue_exclusive(&ep->wq, &wait);
+			__add_wait_queue_exclusive(&ep->wq, &ewq.wait);
			write_unlock_irq(&ep->lock);
-			timed_out = !schedule_hrtimeout_range(to, slack,
-							      HRTIMER_MODE_ABS);
+			ep_schedule(ep, &ewq, to, slack);
		} else {
			write_unlock_irq(&ep->lock);
		}
@@ -1887,7 +1929,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		 */
		eavail = 1;

-		if (!list_empty_careful(&wait.entry)) {
+		if (!list_empty_careful(&ewq.wait.entry)) {
			write_lock_irq(&ep->lock);
			/*
			 * If the thread timed out and is not on the wait queue,
@@ -1896,9 +1938,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
			 * Thus, when wait.entry is empty, it needs to harvest
			 * events.
			 */
-			if (timed_out)
-				eavail = list_empty(&wait.entry);
-			__remove_wait_queue(&ep->wq, &wait);
+			if (ewq.timed_out)
+				eavail = list_empty(&ewq.wait.entry);
+			__remove_wait_queue(&ep->wq, &ewq.wait);
			write_unlock_irq(&ep->lock);
		}
	}

From patchwork Fri Oct 28 21:43:23 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/5] eventpoll: move expires to epoll_wq
Date: Fri, 28 Oct 2022 15:43:23 -0600
Message-Id: <20221028214325.13496-4-axboe@kernel.dk>
In-Reply-To: <20221028214325.13496-1-axboe@kernel.dk>
References: <20221028214325.13496-1-axboe@kernel.dk>

This makes the expiration available to the wakeup handler.
No functional changes expected in this patch, purely in preparation for
being able to use the timeout on the wakeup side.

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index f53bb4ec9e91..8b3c94ab7762 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1765,6 +1765,7 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 struct epoll_wq {
	wait_queue_entry_t wait;
	struct hrtimer timer;
+	ktime_t timeout_ts;
	bool timed_out;
 };

@@ -1825,7 +1826,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 {
	int res, eavail;
	u64 slack = 0;
-	ktime_t expires, *to = NULL;
+	ktime_t *to = NULL;
	struct epoll_wq ewq;

	lockdep_assert_irqs_enabled();
@@ -1834,7 +1835,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,

	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
		slack = select_estimate_accuracy(timeout);
-		to = &expires;
+		to = &ewq.timeout_ts;
		*to = timespec64_to_ktime(*timeout);
	} else if (timeout) {
		/*

From patchwork Fri Oct 28 21:43:24 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 4/5] eventpoll: move file checking earlier for epoll_ctl()
Date: Fri, 28 Oct 2022 15:43:24 -0600
Message-Id: <20221028214325.13496-5-axboe@kernel.dk>
In-Reply-To: <20221028214325.13496-1-axboe@kernel.dk>
References: <20221028214325.13496-1-axboe@kernel.dk>

This just cleans up the checking a bit, in preparation for a change that
will need access to 'ep' earlier.

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 8b3c94ab7762..cd2138d02bda 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -2111,6 +2111,20 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
	if (!f.file)
		goto error_return;

+	/*
+	 * We have to check that the file structure underneath the file
+	 * descriptor the user passed to us _is_ an eventpoll file.
+	 */
+	error = -EINVAL;
+	if (!is_file_epoll(f.file))
+		goto error_fput;
+
+	/*
+	 * At this point it is safe to assume that the "private_data" contains
+	 * our own data structure.
+	 */
+	ep = f.file->private_data;
+
	/* Get the "struct file *" for the target file */
	tf = fdget(fd);
	if (!tf.file)
@@ -2126,12 +2140,10 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
	ep_take_care_of_epollwakeup(epds);

	/*
-	 * We have to check that the file structure underneath the file descriptor
-	 * the user passed to us _is_ an eventpoll file. And also we do not permit
-	 * adding an epoll file descriptor inside itself.
+	 * We do not permit adding an epoll file descriptor inside itself.
	 */
	error = -EINVAL;
-	if (f.file == tf.file || !is_file_epoll(f.file))
+	if (f.file == tf.file)
		goto error_tgt_fput;

	/*
@@ -2147,12 +2159,6 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
		goto error_tgt_fput;
	}

-	/*
-	 * At this point it is safe to assume that the "private_data" contains
-	 * our own data structure.
-	 */
-	ep = f.file->private_data;
-
	/*
	 * When we insert an epoll file descriptor inside another epoll file
	 * descriptor, there is the chance of creating closed loops, which are

From patchwork Fri Oct 28 21:43:25 2022
From: Jens Axboe
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 5/5] eventpoll: add support for min-wait
Date: Fri, 28 Oct 2022 15:43:25 -0600
Message-Id: <20221028214325.13496-6-axboe@kernel.dk>
In-Reply-To: <20221028214325.13496-1-axboe@kernel.dk>
References: <20221028214325.13496-1-axboe@kernel.dk>

Rather than just have a timeout value for waiting on events, add
EPOLL_CTL_MIN_WAIT to allow setting a minimum time that epoll_wait()
should always wait for events to arrive.

For efficiency at medium load, some production workloads inject
artificial timers or sleeps before calling epoll_wait() to get better
batching and higher efficiency. While this does help, it's not as
efficient as it could be. By supporting this directly in epoll_wait(),
we can avoid the extra context switches and the scheduler and timer
overhead.

As an example, running an A/B test on an identical workload at about
~370K reqs/second: without this change, and with the sleep hack
mentioned above (using 200 usec as the timeout), we're doing 310K-340K
non-voluntary context switches per second, with idle CPU on the host at
27-34%. With the sleep hack removed and the epoll min-wait set to the
same 200 usec value, we're handling the exact same load at 292K-315K
non-voluntary context switches and idle CPU of 33-41%, a substantial
win.
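The "sleep hack" mentioned above can be sketched in userspace roughly as follows (an illustration of the pre-patch workaround, not part of this series; the `epoll_wait_batched` helper name and the 200 usec constant are assumptions mirroring the numbers in this message; Linux only):

```c
#include <sys/epoll.h>
#include <unistd.h>

#define MIN_WAIT_USEC 200

/*
 * Pre-patch workaround: sleep briefly before epoll_wait() so that more
 * events can accumulate and be reaped in a single batch, at the cost of
 * an extra timer and context switch per call.
 */
static int epoll_wait_batched(int efd, struct epoll_event *events,
			      int maxevents, int timeout_ms)
{
	usleep(MIN_WAIT_USEC);	/* let more events arrive first */
	return epoll_wait(efd, events, maxevents, timeout_ms);
}
```

The kernel-side min-wait added by this patch achieves the same batching without the extra userspace timer and the context switch it costs.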
Basic test case:

	struct d {
		int p1, p2;
	};

	static void *fn(void *data)
	{
		struct d *d = data;
		char b = 0x89;

		/* Generate 2 events 20 msec apart */
		usleep(10000);
		write(d->p1, &b, sizeof(b));
		usleep(10000);
		write(d->p2, &b, sizeof(b));

		return NULL;
	}

	int main(int argc, char *argv[])
	{
		struct epoll_event ev, events[2];
		pthread_t thread;
		int p1[2], p2[2];
		struct d d;
		int efd, ret;

		efd = epoll_create1(0);
		if (efd < 0) {
			perror("epoll_create");
			return 1;
		}

		if (pipe(p1) < 0) {
			perror("pipe");
			return 1;
		}
		if (pipe(p2) < 0) {
			perror("pipe");
			return 1;
		}

		ev.events = EPOLLIN;
		ev.data.fd = p1[0];
		if (epoll_ctl(efd, EPOLL_CTL_ADD, p1[0], &ev) < 0) {
			perror("epoll add");
			return 1;
		}
		ev.events = EPOLLIN;
		ev.data.fd = p2[0];
		if (epoll_ctl(efd, EPOLL_CTL_ADD, p2[0], &ev) < 0) {
			perror("epoll add");
			return 1;
		}

		/* always wait 200 msec for events */
		ev.data.u64 = 200000;
		if (epoll_ctl(efd, EPOLL_CTL_MIN_WAIT, -1, &ev) < 0) {
			perror("epoll add set timeout");
			return 1;
		}

		d.p1 = p1[1];
		d.p2 = p2[1];
		pthread_create(&thread, NULL, fn, &d);

		/* expect to get 2 events here rather than just 1 */
		ret = epoll_wait(efd, events, 2, -1);
		printf("epoll_wait=%d\n", ret);

		return 0;
	}

Signed-off-by: Jens Axboe
---
 fs/eventpoll.c                 | 100 ++++++++++++++++++++++++++++-----
 include/linux/eventpoll.h      |   2 +-
 include/uapi/linux/eventpoll.h |   1 +
 3 files changed, 87 insertions(+), 16 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index cd2138d02bda..828e2b9771d6 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -117,6 +117,9 @@ struct eppoll_entry {
	/* The "base" pointer is set to the container "struct epitem" */
	struct epitem *base;

+	/* min wait time if (min_wait_ts) & 1 != 0 */
+	ktime_t min_wait_ts;
+
	/*
	 * Wait queue item that will be linked to the target file wait
	 * queue head.
@@ -217,6 +220,9 @@ struct eventpoll {
	u64 gen;
	struct hlist_head refs;

+	/* min wait for epoll_wait() */
+	unsigned int min_wait_ts;
+
 #ifdef CONFIG_NET_RX_BUSY_POLL
	/* used to track busy poll napi_id */
	unsigned int napi_id;
@@ -1747,6 +1753,32 @@ static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms)
	return to;
 }

+struct epoll_wq {
+	wait_queue_entry_t wait;
+	struct hrtimer timer;
+	ktime_t timeout_ts;
+	ktime_t min_wait_ts;
+	struct eventpoll *ep;
+	bool timed_out;
+	int maxevents;
+	int wakeups;
+};
+
+static bool ep_should_min_wait(struct epoll_wq *ewq)
+{
+	if (ewq->min_wait_ts & 1) {
+		/* just an approximation */
+		if (++ewq->wakeups >= ewq->maxevents)
+			goto stop_wait;
+		if (ktime_before(ktime_get_ns(), ewq->min_wait_ts))
+			return true;
+	}
+
+stop_wait:
+	ewq->min_wait_ts &= ~(u64) 1;
+	return false;
+}
+
 /*
  * autoremove_wake_function, but remove even on failure to wake up, because we
  * know that default_wake_function/ttwu will only fail if the thread is already
@@ -1756,27 +1788,37 @@ static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms)
 static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
				       unsigned int mode, int sync, void *key)
 {
-	int ret = default_wake_function(wq_entry, mode, sync, key);
+	struct epoll_wq *ewq = container_of(wq_entry, struct epoll_wq, wait);
+	int ret;
+
+	/*
+	 * If min wait time hasn't been satisfied yet, keep waiting
+	 */
+	if (ep_should_min_wait(ewq))
+		return 0;

+	ret = default_wake_function(wq_entry, mode, sync, key);
	list_del_init(&wq_entry->entry);
	return ret;
 }

-struct epoll_wq {
-	wait_queue_entry_t wait;
-	struct hrtimer timer;
-	ktime_t timeout_ts;
-	bool timed_out;
-};
-
 static enum hrtimer_restart ep_timer(struct hrtimer *timer)
 {
	struct epoll_wq *ewq = container_of(timer, struct epoll_wq, timer);
	struct task_struct *task = ewq->wait.private;
+	const bool is_min_wait = ewq->min_wait_ts & 1;
+
+	if (!is_min_wait || ep_events_available(ewq->ep)) {
+		if (!is_min_wait)
+			ewq->timed_out = true;
+		ewq->min_wait_ts &= ~(u64) 1;
+		wake_up_process(task);
+		return HRTIMER_NORESTART;
+	}

-	ewq->timed_out = true;
-	wake_up_process(task);
-	return HRTIMER_NORESTART;
+	ewq->min_wait_ts &= ~(u64) 1;
+	hrtimer_set_expires_range_ns(&ewq->timer, ewq->timeout_ts, 0);
+	return HRTIMER_RESTART;
 }

 static void ep_schedule(struct eventpoll *ep, struct epoll_wq *ewq, ktime_t *to,
@@ -1831,12 +1873,14 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,

	lockdep_assert_irqs_enabled();

+	ewq.ep = ep;
	ewq.timed_out = false;
+	ewq.maxevents = maxevents;
+	ewq.wakeups = 0;

	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
		slack = select_estimate_accuracy(timeout);
-		to = &ewq.timeout_ts;
-		*to = timespec64_to_ktime(*timeout);
+		ewq.timeout_ts = timespec64_to_ktime(*timeout);
	} else if (timeout) {
		/*
		 * Avoid the unnecessary trip to the wait queue loop, if the
@@ -1845,6 +1889,21 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		ewq.timed_out = 1;
	}

+	/*
+	 * If min_wait is set for this epoll instance, note the min_wait
+	 * time. Ensure the lowest bit is set in ewq.min_wait_ts, that's
+	 * the state bit for whether or not min_wait is enabled.
+	 */
+	if (ep->min_wait_ts) {
+		ewq.min_wait_ts = ktime_add_us(ktime_get_ns(),
+					       ep->min_wait_ts);
+		ewq.min_wait_ts |= (u64) 1;
+		to = &ewq.min_wait_ts;
+	} else {
+		ewq.min_wait_ts = 0;
+		to = &ewq.timeout_ts;
+	}
+
	/*
	 * This call is racy: We may or may not see events that are being added
	 * to the ready list under the lock (e.g., in IRQ callbacks). For cases
@@ -1913,7 +1972,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
		 * important.
		 */
		eavail = ep_events_available(ep);
-		if (!eavail) {
+		if (!eavail || ewq.min_wait_ts & 1) {
			__add_wait_queue_exclusive(&ep->wq, &ewq.wait);
			write_unlock_irq(&ep->lock);
			ep_schedule(ep, &ewq, to, slack);
@@ -2125,6 +2184,17 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
	 */
	ep = f.file->private_data;

+	/*
+	 * Handle EPOLL_CTL_MIN_WAIT upfront as we don't need to care about
+	 * the fd being passed in.
+	 */
+	if (op == EPOLL_CTL_MIN_WAIT) {
+		/* return old value */
+		error = ep->min_wait_ts;
+		ep->min_wait_ts = epds->data;
+		goto error_fput;
+	}
+
	/* Get the "struct file *" for the target file */
	tf = fdget(fd);
	if (!tf.file)
@@ -2257,7 +2327,7 @@ SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd,
 {
	struct epoll_event epds;

-	if (ep_op_has_event(op) &&
+	if ((ep_op_has_event(op) || op == EPOLL_CTL_MIN_WAIT) &&
	    copy_from_user(&epds, event, sizeof(struct epoll_event)))
		return -EFAULT;

diff --git a/include/linux/eventpoll.h b/include/linux/eventpoll.h
index 3337745d81bd..cbef635cb7e4 100644
--- a/include/linux/eventpoll.h
+++ b/include/linux/eventpoll.h
@@ -59,7 +59,7 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
 /* Tells if the epoll_ctl(2) operation needs an event copy from userspace */
 static inline int ep_op_has_event(int op)
 {
-	return op != EPOLL_CTL_DEL;
+	return op != EPOLL_CTL_DEL && op != EPOLL_CTL_MIN_WAIT;
 }

 #else
diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
index 8a3432d0f0dc..81ecb1ca36e0 100644
--- a/include/uapi/linux/eventpoll.h
+++ b/include/uapi/linux/eventpoll.h
@@ -26,6 +26,7 @@
 #define EPOLL_CTL_ADD 1
 #define EPOLL_CTL_DEL 2
 #define EPOLL_CTL_MOD 3
+#define EPOLL_CTL_MIN_WAIT 4

 /* Epoll event masks */
 #define EPOLLIN		(__force __poll_t)0x00000001
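A hedged usage sketch of the proposed opcode (this series is a proposal, so EPOLL_CTL_MIN_WAIT is not in mainline kernels; the opcode value is defined locally to match the uapi change in this patch, and `epoll_set_min_wait` is a hypothetical wrapper, not an API from this series):

```c
#include <sys/epoll.h>

/* EPOLL_CTL_MIN_WAIT only exists with this series applied; define the
 * opcode locally, matching the uapi addition in this patch. */
#ifndef EPOLL_CTL_MIN_WAIT
#define EPOLL_CTL_MIN_WAIT 4
#endif

/*
 * Hypothetical wrapper: ask the kernel to always wait at least 'usec'
 * microseconds in epoll_wait() on this epoll instance. Returns 0 on
 * success, -1 if the running kernel lacks min-wait support (the
 * epoll_ctl() call then fails, typically with EINVAL).
 */
static int epoll_set_min_wait(int efd, unsigned long long usec)
{
	struct epoll_event ev = { 0 };

	/* min wait time is passed in data.u64; the fd argument is unused */
	ev.data.u64 = usec;
	return epoll_ctl(efd, EPOLL_CTL_MIN_WAIT, -1, &ev);
}
```

On kernels without this series the call is expected to fail, so callers should probe once and treat min-wait as an optional optimization.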