From patchwork Tue Nov 5 23:04:43 2019
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 11228925
From: Pavel Begunkov
To: Jens Axboe, io-uring@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH 1/3] io_uring: allocate io_kiocb upfront
Date: Wed, 6 Nov 2019 02:04:43 +0300

Preparation patch. Make io_submit_sqes() allocate the io_kiocb and then
pass it further down the submission path. Another difference is that the
request is now allocated before we get an sqe.
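In other words, the per-iteration ordering in io_submit_sqes() becomes
"allocate the request first, then fetch the sqe". A rough sketch of the
resulting loop body (simplified from the diff below, not compilable on its
own; the mm, drain and link handling are omitted):

	for (i = 0; i < nr; i++) {
		struct sqe_submit s;
		struct io_kiocb *req;

		/* allocate the request up front ... */
		req = io_get_req(ctx, statep);
		if (unlikely(!req))
			break;

		/* ... and only then read the sqe; undo the allocation on failure */
		if (!io_get_sqring(ctx, &s)) {
			__io_free_req(req);
			break;
		}

		/* the preallocated request is handed down instead of being
		 * allocated inside io_submit_sqe() */
		io_submit_sqe(ctx, req, &s, statep, &link);
		submitted++;
	}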
Signed-off-by: Pavel Begunkov
---
 fs/io_uring.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 82c2da99cb5c..920ad731db01 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2538,30 +2538,23 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 
 #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK)
 
-static void io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
-			  struct io_submit_state *state, struct io_kiocb **link)
+static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+			  struct sqe_submit *s, struct io_submit_state *state,
+			  struct io_kiocb **link)
 {
 	struct io_uring_sqe *sqe_copy;
-	struct io_kiocb *req;
 	int ret;
 
 	/* enforce forwards compatibility on users */
 	if (unlikely(s->sqe->flags & ~SQE_VALID_FLAGS)) {
 		ret = -EINVAL;
-		goto err;
-	}
-
-	req = io_get_req(ctx, state);
-	if (unlikely(!req)) {
-		ret = -EAGAIN;
-		goto err;
+		goto err_req;
 	}
 
 	ret = io_req_set_file(ctx, s, state, req);
 	if (unlikely(ret)) {
 err_req:
 		io_free_req(req, NULL);
-err:
 		io_cqring_add_event(ctx, s->sqe->user_data, ret);
 		return;
 	}
@@ -2697,9 +2690,15 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 
 	for (i = 0; i < nr; i++) {
 		struct sqe_submit s;
+		struct io_kiocb *req;
 
-		if (!io_get_sqring(ctx, &s))
+		req = io_get_req(ctx, statep);
+		if (unlikely(!req))
 			break;
+		if (!io_get_sqring(ctx, &s)) {
+			__io_free_req(req);
+			break;
+		}
 
 		if (io_sqe_needs_user(s.sqe) && !*mm) {
 			mm_fault = mm_fault || !mmget_not_zero(ctx->sqo_mm);
@@ -2727,7 +2726,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		s.in_async = async;
 		s.needs_fixed_file = async;
 		trace_io_uring_submit_sqe(ctx, s.sqe->user_data, true, async);
-		io_submit_sqe(ctx, &s, statep, &link);
+		io_submit_sqe(ctx, req, &s, statep, &link);
 		submitted++;
 
 		/*

From patchwork Tue Nov 5 23:04:44 2019
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 11228927
From: Pavel Begunkov
To: Jens Axboe, io-uring@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH 2/3] io_uring: Use submit info inlined into req
Date: Wed, 6 Nov 2019 02:04:44 +0300
Message-Id: <32cc59cefc848ba2e258fc4581684f1c2e67d649.1572993994.git.asml.silence@gmail.com>

A stack-allocated struct sqe_submit is passed down the submission path
along with a request (a.k.a. struct io_kiocb) and is copied into
req->submit for async requests. As the space for it is already allocated,
fill req->submit in the first place instead of using the on-stack one.
As a result:

1. req->submit is the only place for sqe_submit and is always valid,
   so we don't need to track which one to use.
2. there is no need to copy it for the async case.
3. the code is simplified by not carrying it as an argument all the
   way down.
4. the number of function arguments is reduced / spilling potentially
   improves.

The downside is that the stack is most probably cached, which is not
necessarily true for the freshly allocated memory of a request. Another
concern is cache pollution. Though, a request will be touched and fetched
along with req->submit at some point anyway, so it shouldn't be a problem.
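For reference, the pieces involved look roughly like the following
(abridged sketch; only the members this series touches are shown, the
real structures have more fields):

	struct sqe_submit {
		const struct io_uring_sqe	*sqe;
		/* ring_file, ring_fd, sequence, has_user,
		 * in_async, needs_fixed_file, ... */
	};

	struct io_kiocb {
		/* ... */
		struct sqe_submit	submit;
		/* ... */
	};

With the submit info inlined, io_get_sqring(ctx, &req->submit) fills the
request directly, and the memcpy(&req->submit, s, sizeof(*s)) calls on the
async and link paths go away.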
Signed-off-by: Pavel Begunkov
---
 fs/io_uring.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 920ad731db01..ecb5a4336389 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2443,7 +2443,6 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	sqe_copy = kmemdup(s->sqe, sizeof(*sqe_copy), GFP_KERNEL);
 	if (sqe_copy) {
 		s->sqe = sqe_copy;
-		memcpy(&req->submit, s, sizeof(*s));
 		if (req->work.flags & IO_WQ_WORK_NEEDS_FILES) {
 			ret = io_grab_files(ctx, req);
 			if (ret) {
@@ -2578,13 +2577,11 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		}
 		s->sqe = sqe_copy;
-		memcpy(&req->submit, s, sizeof(*s));
 		trace_io_uring_link(ctx, req, prev);
 		list_add_tail(&req->list, &prev->link_list);
 	} else if (s->sqe->flags & IOSQE_IO_LINK) {
 		req->flags |= REQ_F_LINK;
-		memcpy(&req->submit, s, sizeof(*s));
 		INIT_LIST_HEAD(&req->link_list);
 		*link = req;
 	} else {
@@ -2689,18 +2686,17 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 	}
 
 	for (i = 0; i < nr; i++) {
-		struct sqe_submit s;
 		struct io_kiocb *req;
 
 		req = io_get_req(ctx, statep);
 		if (unlikely(!req))
 			break;
-		if (!io_get_sqring(ctx, &s)) {
+		if (!io_get_sqring(ctx, &req->submit)) {
 			__io_free_req(req);
 			break;
 		}
 
-		if (io_sqe_needs_user(s.sqe) && !*mm) {
+		if (io_sqe_needs_user(req->submit.sqe) && !*mm) {
 			mm_fault = mm_fault || !mmget_not_zero(ctx->sqo_mm);
 			if (!mm_fault) {
 				use_mm(ctx->sqo_mm);
@@ -2708,7 +2704,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 			}
 		}
 
-		if (link && (s.sqe->flags & IOSQE_IO_DRAIN)) {
+		if (link && (req->submit.sqe->flags & IOSQE_IO_DRAIN)) {
 			if (!shadow_req) {
 				shadow_req = io_get_req(ctx, NULL);
 				if (unlikely(!shadow_req))
@@ -2716,24 +2712,25 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 				shadow_req->flags |= (REQ_F_IO_DRAIN | REQ_F_SHADOW_DRAIN);
 				refcount_dec(&shadow_req->refs);
 			}
-			shadow_req->sequence = s.sequence;
+			shadow_req->sequence = req->submit.sequence;
 		}
 
 out:
-		s.ring_file = ring_file;
-		s.ring_fd = ring_fd;
-		s.has_user = *mm != NULL;
-		s.in_async = async;
-		s.needs_fixed_file = async;
-		trace_io_uring_submit_sqe(ctx, s.sqe->user_data, true, async);
-		io_submit_sqe(ctx, req, &s, statep, &link);
+		req->submit.ring_file = ring_file;
+		req->submit.ring_fd = ring_fd;
+		req->submit.has_user = *mm != NULL;
+		req->submit.in_async = async;
+		req->submit.needs_fixed_file = async;
+		trace_io_uring_submit_sqe(ctx, req->submit.sqe->user_data,
+					  true, async);
+		io_submit_sqe(ctx, req, &req->submit, statep, &link);
 		submitted++;
 
 		/*
 		 * If previous wasn't linked and we have a linked command,
 		 * that's the end of the chain. Submit the previous link.
 		 */
-		if (!(s.sqe->flags & IOSQE_IO_LINK) && link) {
+		if (!(req->submit.sqe->flags & IOSQE_IO_LINK) && link) {
 			io_queue_link_head(ctx, link, &link->submit, shadow_req);
 			link = NULL;
 			shadow_req = NULL;

From patchwork Tue Nov 5 23:04:45 2019
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 11228929
From: Pavel Begunkov
To: Jens Axboe, io-uring@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH 3/3] io_uring: use inlined struct sqe_submit
Date: Wed, 6 Nov 2019 02:04:45 +0300

req->submit is always kept up to date, so use it directly instead of
passing struct sqe_submit around.

Signed-off-by: Pavel Begunkov
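Concretely, helpers that used to take a struct sqe_submit (or an sqe)
argument now pull it out of the request themselves, along the lines of
this sketch (compare the io_prep_rw() hunk below):

	/* before */
	static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
			      bool force_nonblock)
	{
		const struct io_uring_sqe *sqe = s->sqe;
		...
	}

	/* after */
	static int io_prep_rw(struct io_kiocb *req, bool force_nonblock)
	{
		const struct io_uring_sqe *sqe = req->submit.sqe;
		...
	}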
---
 fs/io_uring.c | 85 +++++++++++++++++++++++++--------------------------
 1 file changed, 42 insertions(+), 43 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ecb5a4336389..e40a6ed54adf 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1157,10 +1157,9 @@ static bool io_file_supports_async(struct file *file)
 	return false;
 }
 
-static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
-		      bool force_nonblock)
+static int io_prep_rw(struct io_kiocb *req, bool force_nonblock)
 {
-	const struct io_uring_sqe *sqe = s->sqe;
+	const struct io_uring_sqe *sqe = req->submit.sqe;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct kiocb *kiocb = &req->rw;
 	unsigned ioprio;
@@ -1408,8 +1407,8 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb,
 	return ret;
 }
 
-static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
-		   struct io_kiocb **nxt, bool force_nonblock)
+static int io_read(struct io_kiocb *req, struct io_kiocb **nxt,
+		   bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
 	struct kiocb *kiocb = &req->rw;
@@ -1418,7 +1417,7 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 	size_t iov_count;
 	ssize_t read_size, ret;
 
-	ret = io_prep_rw(req, s, force_nonblock);
+	ret = io_prep_rw(req, force_nonblock);
 	if (ret)
 		return ret;
 	file = kiocb->ki_filp;
@@ -1426,7 +1425,7 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 	if (unlikely(!(file->f_mode & FMODE_READ)))
 		return -EBADF;
 
-	ret = io_import_iovec(req->ctx, READ, s, &iovec, &iter);
+	ret = io_import_iovec(req->ctx, READ, &req->submit, &iovec, &iter);
 	if (ret < 0)
 		return ret;
@@ -1458,7 +1457,7 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 			ret2 = -EAGAIN;
 		/* Catch -EAGAIN return for forced non-blocking submission */
 		if (!force_nonblock || ret2 != -EAGAIN)
-			kiocb_done(kiocb, ret2, nxt, s->in_async);
+			kiocb_done(kiocb, ret2, nxt, req->submit.in_async);
 		else
 			ret = -EAGAIN;
 	}
@@ -1466,8 +1465,8 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 	return ret;
 }
 
-static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
-		    struct io_kiocb **nxt, bool force_nonblock)
+static int io_write(struct io_kiocb *req, struct io_kiocb **nxt,
+		    bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
 	struct kiocb *kiocb = &req->rw;
@@ -1476,7 +1475,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
 	size_t iov_count;
 	ssize_t ret;
 
-	ret = io_prep_rw(req, s, force_nonblock);
+	ret = io_prep_rw(req, force_nonblock);
 	if (ret)
 		return ret;
@@ -1484,7 +1483,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
 	if (unlikely(!(file->f_mode & FMODE_WRITE)))
 		return -EBADF;
 
-	ret = io_import_iovec(req->ctx, WRITE, s, &iovec, &iter);
+	ret = io_import_iovec(req->ctx, WRITE, &req->submit, &iovec, &iter);
 	if (ret < 0)
 		return ret;
@@ -1521,7 +1520,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
 		else
 			ret2 = loop_rw_iter(WRITE, file, kiocb, &iter);
 		if (!force_nonblock || ret2 != -EAGAIN)
-			kiocb_done(kiocb, ret2, nxt, s->in_async);
+			kiocb_done(kiocb, ret2, nxt, req->submit.in_async);
 		else
 			ret = -EAGAIN;
 	}
@@ -2177,9 +2176,9 @@ static int io_async_cancel(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
-static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			const struct io_uring_sqe *sqe)
+static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
+	const struct io_uring_sqe *sqe = req->submit.sqe;
 	struct io_uring_sqe *sqe_copy;
 
 	if (!io_sequence_defer(ctx, req) && list_empty(&ctx->defer_list))
@@ -2206,10 +2205,10 @@ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
 }
 
 static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			   const struct sqe_submit *s, struct io_kiocb **nxt,
-			   bool force_nonblock)
+			   struct io_kiocb **nxt, bool force_nonblock)
 {
 	int ret, opcode;
+	struct sqe_submit *s = &req->submit;
 
 	req->user_data = READ_ONCE(s->sqe->user_data);
@@ -2221,18 +2220,18 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	case IORING_OP_READV:
 		if (unlikely(s->sqe->buf_index))
 			return -EINVAL;
-		ret = io_read(req, s, nxt, force_nonblock);
+		ret = io_read(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_WRITEV:
 		if (unlikely(s->sqe->buf_index))
 			return -EINVAL;
-		ret = io_write(req, s, nxt, force_nonblock);
+		ret = io_write(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_READ_FIXED:
-		ret = io_read(req, s, nxt, force_nonblock);
+		ret = io_read(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_WRITE_FIXED:
-		ret = io_write(req, s, nxt, force_nonblock);
+		ret = io_write(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_FSYNC:
 		ret = io_fsync(req, s->sqe, nxt, force_nonblock);
@@ -2307,7 +2306,7 @@ static void io_wq_submit_work(struct io_wq_work **workptr)
 		s->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0;
 		s->in_async = true;
 		do {
-			ret = __io_submit_sqe(ctx, req, s, &nxt, false);
+			ret = __io_submit_sqe(ctx, req, &nxt, false);
 			/*
 			 * We can get EAGAIN for polled IO even though we're
 			 * forcing a sync submission from here, since we can't
@@ -2359,9 +2358,10 @@ static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
 	return table->files[index & IORING_FILE_TABLE_MASK];
 }
 
-static int io_req_set_file(struct io_ring_ctx *ctx, const struct sqe_submit *s,
+static int io_req_set_file(struct io_ring_ctx *ctx,
 			   struct io_submit_state *state, struct io_kiocb *req)
 {
+	struct sqe_submit *s = &req->submit;
 	unsigned flags;
 	int fd;
@@ -2425,12 +2425,11 @@ static int io_grab_files(struct io_ring_ctx *ctx, struct io_kiocb *req)
 	return ret;
 }
 
-static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			  struct sqe_submit *s)
+static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
 	int ret;
 
-	ret = __io_submit_sqe(ctx, req, s, NULL, true);
+	ret = __io_submit_sqe(ctx, req, NULL, true);
 
 	/*
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
@@ -2438,6 +2437,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 */
 	if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
 	    (req->flags & REQ_F_MUST_PUNT))) {
+		struct sqe_submit *s = &req->submit;
 		struct io_uring_sqe *sqe_copy;
 
 		sqe_copy = kmemdup(s->sqe, sizeof(*sqe_copy), GFP_KERNEL);
@@ -2475,31 +2475,30 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return ret;
 }
 
-static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			struct sqe_submit *s)
+static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
 	int ret;
 
-	ret = io_req_defer(ctx, req, s->sqe);
+	ret = io_req_defer(ctx, req);
 	if (ret) {
 		if (ret != -EIOCBQUEUED) {
 			io_free_req(req, NULL);
-			io_cqring_add_event(ctx, s->sqe->user_data, ret);
+			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
 		}
 		return 0;
 	}
 
-	return __io_queue_sqe(ctx, req, s);
+	return __io_queue_sqe(ctx, req);
 }
 
 static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			      struct sqe_submit *s, struct io_kiocb *shadow)
+			      struct io_kiocb *shadow)
 {
 	int ret;
 	int need_submit = false;
 
 	if (!shadow)
-		return io_queue_sqe(ctx, req, s);
+		return io_queue_sqe(ctx, req);
 
 	/*
 	 * Mark the first IO in link list as DRAIN, let all the following
@@ -2507,12 +2506,12 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 * list.
	 */
 	req->flags |= REQ_F_IO_DRAIN;
-	ret = io_req_defer(ctx, req, s->sqe);
+	ret = io_req_defer(ctx, req);
 	if (ret) {
 		if (ret != -EIOCBQUEUED) {
 			io_free_req(req, NULL);
 			__io_free_req(shadow);
-			io_cqring_add_event(ctx, s->sqe->user_data, ret);
+			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
 			return 0;
 		}
 	} else {
@@ -2530,7 +2529,7 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	spin_unlock_irq(&ctx->completion_lock);
 
 	if (need_submit)
-		return __io_queue_sqe(ctx, req, s);
+		return __io_queue_sqe(ctx, req);
 
 	return 0;
 }
@@ -2538,10 +2537,10 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK)
 
 static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			  struct sqe_submit *s, struct io_submit_state *state,
-			  struct io_kiocb **link)
+			  struct io_submit_state *state, struct io_kiocb **link)
 {
 	struct io_uring_sqe *sqe_copy;
+	struct sqe_submit *s = &req->submit;
 	int ret;
 
 	/* enforce forwards compatibility on users */
@@ -2550,7 +2549,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		goto err_req;
 	}
 
-	ret = io_req_set_file(ctx, s, state, req);
+	ret = io_req_set_file(ctx, state, req);
 	if (unlikely(ret)) {
 err_req:
 		io_free_req(req, NULL);
@@ -2585,7 +2584,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		INIT_LIST_HEAD(&req->link_list);
 		*link = req;
 	} else {
-		io_queue_sqe(ctx, req, s);
+		io_queue_sqe(ctx, req);
 	}
 }
@@ -2723,7 +2722,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		req->submit.needs_fixed_file = async;
 		trace_io_uring_submit_sqe(ctx, req->submit.sqe->user_data,
					  true, async);
-		io_submit_sqe(ctx, req, &req->submit, statep, &link);
+		io_submit_sqe(ctx, req, statep, &link);
 		submitted++;
 
 		/*
@@ -2731,14 +2730,14 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		 * that's the end of the chain. Submit the previous link.
 		 */
 		if (!(req->submit.sqe->flags & IOSQE_IO_LINK) && link) {
-			io_queue_link_head(ctx, link, &link->submit, shadow_req);
+			io_queue_link_head(ctx, link, shadow_req);
 			link = NULL;
 			shadow_req = NULL;
 		}
 	}
 
 	if (link)
-		io_queue_link_head(ctx, link, &link->submit, shadow_req);
+		io_queue_link_head(ctx, link, shadow_req);
 
 	if (statep)
 		io_submit_state_end(&state);
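Taken together, the series stops threading a separate struct sqe_submit
through the submission path. The resulting call chain looks roughly like
this (schematic only, with every level reading req->submit directly):

	io_submit_sqes()
	  -> io_submit_sqe(ctx, req, statep, &link)
	       -> io_queue_sqe(ctx, req)
	            -> __io_queue_sqe(ctx, req)
	                 -> __io_submit_sqe(ctx, req, ...)
	  -> io_queue_link_head(ctx, link, shadow_req)	/* for queued links */

compared to the pre-series code, where a struct sqe_submit pointer had to
accompany the request at every step.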