From patchwork Sun May 24 19:22:06 2020
X-Patchwork-Submitter: Jens Axboe <axboe@kernel.dk>
X-Patchwork-Id: 11567611
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 12/12] io_uring: support true async buffered reads, if file provides it
Date: Sun, 24 May 2020 13:22:06 -0600
Message-Id: <20200524192206.4093-13-axboe@kernel.dk>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200524192206.4093-1-axboe@kernel.dk>
References: <20200524192206.4093-1-axboe@kernel.dk>

If the file is flagged with FMODE_BUF_RASYNC, then we don't have to punt
the buffered read to an io-wq worker. Instead we can rely on page
unlocking callbacks to support retry based async IO. This is a lot more
efficient than doing async thread offload.

The retry is done similarly to how we handle poll based retry. From the
unlock callback, we simply queue the retry to a task_work based handler.
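Not part of the patch, but to make the effect concrete from userspace: a
plain buffered read submitted through io_uring can now complete without
ever being punted to an io-wq thread, provided the filesystem marks its
files FMODE_BUF_RASYNC (wired up in the earlier patches of this series).
A minimal liburing consumer that would exercise this path could look as
follows; the filename, queue depth, and buffer size are arbitrary and
error handling is elided:

	#include <fcntl.h>
	#include <stdio.h>
	#include <liburing.h>

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;
		char buf[4096];
		int fd;

		io_uring_queue_init(8, &ring, 0);
		fd = open("testfile", O_RDONLY);	/* buffered, no O_DIRECT */

		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
		io_uring_submit(&ring);

		/*
		 * On a FMODE_BUF_RASYNC file, this completion is driven by
		 * the page unlock -> task_work retry added below, not by an
		 * io-wq worker thread.
		 */
		io_uring_wait_cqe(&ring, &cqe);
		printf("read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);

		io_uring_queue_exit(&ring);
		return 0;
	}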
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 112 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index e95481c552ff..23073857239c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -498,6 +498,8 @@ struct io_async_rw {
 	struct iovec			*iov;
 	ssize_t				nr_segs;
 	ssize_t				size;
+	struct wait_page_queue		wpq;
+	struct callback_head		task_work;
 };
 
 struct io_async_ctx {
@@ -2568,6 +2570,112 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
+static void io_async_buf_cancel(struct callback_head *cb)
+{
+	struct io_async_rw *rw;
+	struct io_ring_ctx *ctx;
+	struct io_kiocb *req;
+
+	rw = container_of(cb, struct io_async_rw, task_work);
+	req = rw->wpq.wait.private;
+	ctx = req->ctx;
+
+	spin_lock_irq(&ctx->completion_lock);
+	io_cqring_fill_event(req, -ECANCELED);
+	io_commit_cqring(ctx);
+	spin_unlock_irq(&ctx->completion_lock);
+
+	io_cqring_ev_posted(ctx);
+	req_set_fail_links(req);
+	io_double_put_req(req);
+}
+
+static void io_async_buf_retry(struct callback_head *cb)
+{
+	struct io_async_rw *rw;
+	struct io_ring_ctx *ctx;
+	struct io_kiocb *req;
+
+	rw = container_of(cb, struct io_async_rw, task_work);
+	req = rw->wpq.wait.private;
+	ctx = req->ctx;
+
+	__set_current_state(TASK_RUNNING);
+	mutex_lock(&ctx->uring_lock);
+	__io_queue_sqe(req, NULL);
+	mutex_unlock(&ctx->uring_lock);
+}
+
+static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
+			     int sync, void *arg)
+{
+	struct wait_page_queue *wpq;
+	struct io_kiocb *req = wait->private;
+	struct io_async_rw *rw = &req->io->rw;
+	struct wait_page_key *key = arg;
+	struct task_struct *tsk;
+	int ret;
+
+	wpq = container_of(wait, struct wait_page_queue, wait);
+
+	ret = wake_page_match(wpq, key);
+	if (ret != 1)
+		return ret;
+
+	list_del_init(&wait->entry);
+
+	init_task_work(&rw->task_work, io_async_buf_retry);
+	/* submit ref gets dropped, acquire a new one */
+	refcount_inc(&req->refs);
+	tsk = req->task;
+	ret = task_work_add(tsk, &rw->task_work, true);
+	if (unlikely(ret)) {
+		/* queue just for cancelation */
+		init_task_work(&rw->task_work, io_async_buf_cancel);
+		tsk = io_wq_get_task(req->ctx->io_wq);
+		task_work_add(tsk, &rw->task_work, true);
+	}
+	wake_up_process(tsk);
+	return 1;
+}
+
+static bool io_rw_should_retry(struct io_kiocb *req)
+{
+	struct kiocb *kiocb = &req->rw.kiocb;
+	int ret;
+
+	/* never retry for NOWAIT, we just complete with -EAGAIN */
+	if (req->flags & REQ_F_NOWAIT)
+		return false;
+
+	/* already tried, or we're doing O_DIRECT */
+	if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_WAITQ))
+		return false;
+	/*
+	 * just use poll if we can, and don't attempt if the fs doesn't
+	 * support callback based unlocks
+	 */
+	if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
+		return false;
+
+	/*
+	 * If request type doesn't require req->io to defer in general,
+	 * we need to allocate it here
+	 */
+	if (!req->io && __io_alloc_async_ctx(req))
+		return false;
+
+	ret = kiocb_wait_page_queue_init(kiocb, &req->io->rw.wpq,
+						io_async_buf_func, req);
+	if (!ret) {
+		get_task_struct(current);
+		req->task = current;
+		return true;
+	}
+
+	return false;
+}
+
 static int io_read(struct io_kiocb *req, bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
@@ -2601,6 +2709,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 	if (!ret) {
 		ssize_t ret2;
 
+retry:
 		if (req->file->f_op->read_iter)
 			ret2 = call_read_iter(req->file, kiocb, &iter);
 		else
@@ -2619,6 +2728,9 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 		if (!(req->flags & REQ_F_NOWAIT) &&
 		    !file_can_poll(req->file))
 			req->flags |= REQ_F_MUST_PUNT;
+		if (io_rw_should_retry(req))
+			goto retry;
+		kiocb->ki_flags &= ~IOCB_WAITQ;
 		return -EAGAIN;
 	}
 }
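Worth noting the fallback in io_async_buf_func(): if task_work_add()
fails because the original task is exiting, the wakeup handler instead
queues io_async_buf_cancel() on an io-wq thread, so the request is
completed with -ECANCELED rather than leaked.

A filesystem opts into this retry path purely by flagging its files at
open time, as the other patches in this series do for ext4, btrfs, and
XFS. A minimal sketch of such an opt-in (myfs_file_open is a
hypothetical example, not taken from this series):

	static int myfs_file_open(struct inode *inode, struct file *filp)
	{
		/* buffered reads may retry via page unlock callbacks */
		filp->f_mode |= FMODE_BUF_RASYNC;
		return generic_file_open(inode, filp);
	}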