From patchwork Mon Oct 31 13:41:15 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025933
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Subject: [PATCH for-next 01/12] io_uring: infrastructure for retargeting rsrc nodes
Date: Mon, 31 Oct 2022 06:41:15 -0700
Message-ID: <20221031134126.82928-2-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>
X-Mailing-List: io-uring@vger.kernel.org

rsrc node cleanup can be delayed indefinitely when there are long-lived requests. For example, if a file is located in the same rsrc node as a long-lived socket with a multishot poll, then even if the file is unregistered it will not be closed while the poll request is still active.
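To make that failure mode concrete, here is a minimal userspace sketch of the scenario (assuming liburing; ring setup and error handling are omitted, and the helper name is purely illustrative, not part of this patch):

/*
 * Illustrative only: a multishot poll against a registered file keeps the
 * old rsrc node (and every file it references) alive. Assumes liburing and
 * an already-connected socket "sock".
 */
#include <liburing.h>
#include <poll.h>

static void pin_old_rsrc_node(struct io_uring *ring, int sock, int other_fd)
{
	struct io_uring_sqe *sqe;
	int fds[2] = { sock, other_fd };

	/* Both files end up in the same rsrc node. */
	io_uring_register_files(ring, fds, 2);

	/* Long-lived request against registered slot 0: it holds a reference
	 * on the current rsrc node until it completes or is cancelled. */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_poll_multishot(sqe, 0, POLLIN);
	sqe->flags |= IOSQE_FIXED_FILE;
	io_uring_submit(ring);

	/* Removing other_fd from slot 1 switches the ring to a new rsrc node,
	 * but the old node (and other_fd's struct file) stays pinned for as
	 * long as the multishot poll above is active. */
	fds[1] = -1;
	io_uring_register_files_update(ring, 1, &fds[1], 1);
}

The retargeting added by this series moves such long-lived requests onto the current node, so the old node can be released.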
Introduce a timer when rsrc node is switched, so that periodically we can retarget these long lived requests to the newest nodes. That will allow the old nodes to be cleaned up, freeing resources. Signed-off-by: Dylan Yudaken --- include/linux/io_uring_types.h | 2 + io_uring/io_uring.c | 1 + io_uring/opdef.h | 1 + io_uring/rsrc.c | 92 ++++++++++++++++++++++++++++++++++ io_uring/rsrc.h | 1 + 5 files changed, 97 insertions(+) diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h index f5b687a787a3..1d4eff4e632c 100644 --- a/include/linux/io_uring_types.h +++ b/include/linux/io_uring_types.h @@ -327,6 +327,8 @@ struct io_ring_ctx { struct llist_head rsrc_put_llist; struct list_head rsrc_ref_list; spinlock_t rsrc_ref_lock; + struct delayed_work rsrc_retarget_work; + bool rsrc_retarget_scheduled; struct list_head io_buffers_pages; diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index 6cc16e39b27f..ea2260359c56 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -320,6 +320,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p) spin_lock_init(&ctx->rsrc_ref_lock); INIT_LIST_HEAD(&ctx->rsrc_ref_list); INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work); + INIT_DELAYED_WORK(&ctx->rsrc_retarget_work, io_rsrc_retarget_work); init_llist_head(&ctx->rsrc_put_llist); init_llist_head(&ctx->work_llist); INIT_LIST_HEAD(&ctx->tctx_list); diff --git a/io_uring/opdef.h b/io_uring/opdef.h index 3efe06d25473..1b72b14cb5ab 100644 --- a/io_uring/opdef.h +++ b/io_uring/opdef.h @@ -37,6 +37,7 @@ struct io_op_def { int (*prep_async)(struct io_kiocb *); void (*cleanup)(struct io_kiocb *); void (*fail)(struct io_kiocb *); + bool (*can_retarget_rsrc)(struct io_kiocb *); }; extern const struct io_op_def io_op_defs[]; diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index 55d4ab96fb92..106210e0d5d5 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -15,6 +15,7 @@ #include "io_uring.h" #include "openclose.h" #include "rsrc.h" +#include "opdef.h" struct io_rsrc_update { struct file *file; @@ -204,6 +205,95 @@ void io_rsrc_put_work(struct work_struct *work) } } + +static unsigned int io_rsrc_retarget_req(struct io_ring_ctx *ctx, + struct io_kiocb *req) + __must_hold(&ctx->uring_lock) +{ + if (!req->rsrc_node || + req->rsrc_node == ctx->rsrc_node) + return 0; + if (!io_op_defs[req->opcode].can_retarget_rsrc) + return 0; + if (!(*io_op_defs[req->opcode].can_retarget_rsrc)(req)) + return 0; + + io_rsrc_put_node(req->rsrc_node, 1); + req->rsrc_node = ctx->rsrc_node; + return 1; +} + +static unsigned int io_rsrc_retarget_table(struct io_ring_ctx *ctx, + struct io_hash_table *table) +{ + unsigned int nr_buckets = 1U << table->hash_bits; + unsigned int refs = 0; + struct io_kiocb *req; + int i; + + for (i = 0; i < nr_buckets; i++) { + struct io_hash_bucket *hb = &table->hbs[i]; + + spin_lock(&hb->lock); + hlist_for_each_entry(req, &hb->list, hash_node) + refs += io_rsrc_retarget_req(ctx, req); + spin_unlock(&hb->lock); + } + return refs; +} + +static void io_rsrc_retarget_schedule(struct io_ring_ctx *ctx) + __must_hold(&ctx->uring_lock) +{ + percpu_ref_get(&ctx->refs); + mod_delayed_work(system_wq, &ctx->rsrc_retarget_work, 60 * HZ); + ctx->rsrc_retarget_scheduled = true; +} + +static void __io_rsrc_retarget_work(struct io_ring_ctx *ctx) + __must_hold(&ctx->uring_lock) +{ + struct io_rsrc_node *node; + unsigned int refs; + bool any_waiting; + + if (!ctx->rsrc_node) + return; + + spin_lock_irq(&ctx->rsrc_ref_lock); + any_waiting = false; + 
list_for_each_entry(node, &ctx->rsrc_ref_list, node) { + if (!node->done) { + any_waiting = true; + break; + } + } + spin_unlock_irq(&ctx->rsrc_ref_lock); + + if (!any_waiting) + return; + + refs = io_rsrc_retarget_table(ctx, &ctx->cancel_table); + refs += io_rsrc_retarget_table(ctx, &ctx->cancel_table_locked); + + ctx->rsrc_cached_refs -= refs; + while (unlikely(ctx->rsrc_cached_refs < 0)) + io_rsrc_refs_refill(ctx); +} + +void io_rsrc_retarget_work(struct work_struct *work) +{ + struct io_ring_ctx *ctx; + + ctx = container_of(work, struct io_ring_ctx, rsrc_retarget_work.work); + + mutex_lock(&ctx->uring_lock); + ctx->rsrc_retarget_scheduled = false; + __io_rsrc_retarget_work(ctx); + mutex_unlock(&ctx->uring_lock); + percpu_ref_put(&ctx->refs); +} + void io_wait_rsrc_data(struct io_rsrc_data *data) { if (data && !atomic_dec_and_test(&data->refs)) @@ -285,6 +375,8 @@ void io_rsrc_node_switch(struct io_ring_ctx *ctx, atomic_inc(&data_to_kill->refs); percpu_ref_kill(&rsrc_node->refs); ctx->rsrc_node = NULL; + if (!ctx->rsrc_retarget_scheduled) + io_rsrc_retarget_schedule(ctx); } if (!ctx->rsrc_node) { diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h index 81445a477622..2b94df8fd9e8 100644 --- a/io_uring/rsrc.h +++ b/io_uring/rsrc.h @@ -54,6 +54,7 @@ struct io_mapped_ubuf { }; void io_rsrc_put_work(struct work_struct *work); +void io_rsrc_retarget_work(struct work_struct *work); void io_rsrc_refs_refill(struct io_ring_ctx *ctx); void io_wait_rsrc_data(struct io_rsrc_data *data); void io_rsrc_node_destroy(struct io_rsrc_node *ref_node); From patchwork Mon Oct 31 13:41:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dylan Yudaken X-Patchwork-Id: 13025932 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CA434ECAAA1 for ; Mon, 31 Oct 2022 13:41:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231263AbiJaNlq (ORCPT ); Mon, 31 Oct 2022 09:41:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230519AbiJaNlp (ORCPT ); Mon, 31 Oct 2022 09:41:45 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C9A9101F7 for ; Mon, 31 Oct 2022 06:41:44 -0700 (PDT) Received: from pps.filterd (m0109332.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 29VDFR4G007556 for ; Mon, 31 Oct 2022 06:41:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=meta.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=s2048-2021-q4; bh=avQGXtOlKbtlo+TC5tXjdl4zYZlh9/nGEAy2UqxdhPo=; b=kjsXX3xYdtBOEUzQFNhwF2oU4UPZO6p6EHsdTkkdcu/+CobZB16gPLMG4s8uUWWiAPTN dETg5tK5YMpMwOJhz6ASSq3UxOXAkI4mOzECojVDYb2ucWNQY8BCM5fNhhtTQcf42nyV /OEpaR/0hMn9dEfsp5TOFfSYnJvM1bsiSR1nIMVU1jk6aUvNRk8IAicb80VWPofzN5aH mLjg7jK/ZP+rciQv6iMp8/4HLw7LHaDshnBUbx5DXcxJNFjn6zwSMnOYxtFn2cxbAym+ vG04FOUE2iPg186nWzTpP0WCfId8jIKrbuSgmvZlwW1ORvJcHnhu9Kak368VzLSv654S hw== Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3kh1x1xc91-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 
verify=NOT) for ; Mon, 31 Oct 2022 06:41:43 -0700 Received: from twshared23862.08.ash9.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:21d::6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.31; Mon, 31 Oct 2022 06:41:42 -0700 Received: by devbig038.lla2.facebook.com (Postfix, from userid 572232) id AC65F8A1964A; Mon, 31 Oct 2022 06:41:35 -0700 (PDT) From: Dylan Yudaken To: Jens Axboe , Pavel Begunkov CC: , , Dylan Yudaken Subject: [PATCH for-next 02/12] io_uring: io-wq helper to iterate all work Date: Mon, 31 Oct 2022 06:41:16 -0700 Message-ID: <20221031134126.82928-3-dylany@meta.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20221031134126.82928-1-dylany@meta.com> References: <20221031134126.82928-1-dylany@meta.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-ORIG-GUID: GcV0SimPA_Xfr9CWV-WVuhbmy_5uS8Is X-Proofpoint-GUID: GcV0SimPA_Xfr9CWV-WVuhbmy_5uS8Is X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-10-31_15,2022-10-31_01,2022-06-22_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org Add a helper to iterate all work currently queued on an io-wq. Signed-off-by: Dylan Yudaken --- io_uring/io-wq.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++++ io_uring/io-wq.h | 3 +++ 2 files changed, 52 insertions(+) diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c index 6f1d0e5df23a..47cbe2df05c4 100644 --- a/io_uring/io-wq.c +++ b/io_uring/io-wq.c @@ -38,6 +38,11 @@ enum { IO_ACCT_STALLED_BIT = 0, /* stalled on hash */ }; +struct io_for_each_work_data { + work_for_each_fn *cb; + void *data; +}; + /* * One for each thread in a wqe pool */ @@ -856,6 +861,19 @@ static bool io_wq_for_each_worker(struct io_wqe *wqe, return ret; } +static bool io_wq_for_each_work_cb(struct io_worker *w, void *data) +{ + struct io_for_each_work_data *f = data; + + raw_spin_lock(&w->lock); + if (w->cur_work) + f->cb(w->cur_work, f->data); + if (w->next_work) + f->cb(w->next_work, f->data); + raw_spin_unlock(&w->lock); + return false; +} + static bool io_wq_worker_wake(struct io_worker *worker, void *data) { __set_notify_signal(worker->task); @@ -1113,6 +1131,37 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel, return IO_WQ_CANCEL_NOTFOUND; } +void io_wq_for_each(struct io_wq *wq, work_for_each_fn *cb, void *data) +{ + int node, i; + struct io_for_each_work_data wq_data = { + .cb = cb, + .data = data + }; + + for_each_node(node) { + struct io_wqe *wqe = wq->wqes[node]; + + for (i = 0; i < IO_WQ_ACCT_NR; i++) { + struct io_wqe_acct *acct = io_get_acct(wqe, i == 0); + struct io_wq_work_node *node, *prev; + struct io_wq_work *work; + + raw_spin_lock(&acct->lock); + wq_list_for_each(node, prev, &acct->work_list) { + work = container_of(node, struct io_wq_work, list); + cb(work, data); + } + raw_spin_unlock(&acct->lock); + } + + + raw_spin_lock(&wqe->lock); + io_wq_for_each_worker(wqe, io_wq_for_each_work_cb, &wq_data); + raw_spin_unlock(&wqe->lock); + } +} + static int io_wqe_hash_wake(struct wait_queue_entry *wait, unsigned mode, int sync, void *key) { diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h index 31228426d192..163cb12259b0 100644 --- a/io_uring/io-wq.h +++ b/io_uring/io-wq.h @@ -63,6 +63,9 @@ typedef bool (work_cancel_fn)(struct io_wq_work *, void *); enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel, void *data, bool cancel_all); +typedef void (work_for_each_fn)(struct 
io_wq_work *, void *); +void io_wq_for_each(struct io_wq *wq, work_for_each_fn *cb, void *data); + #if defined(CONFIG_IO_WQ) extern void io_wq_worker_sleeping(struct task_struct *); extern void io_wq_worker_running(struct task_struct *);
From patchwork Mon Oct 31 13:41:17 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025935
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Subject: [PATCH for-next 03/12] io_uring: support retargeting rsrc on requests in the io-wq
Date: Mon, 31 Oct 2022 06:41:17 -0700
Message-ID: <20221031134126.82928-4-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>
X-Mailing-List: io-uring@vger.kernel.org

Requests can be in flight on the io-wq, and can be long lived (for example a double read will get onto the
io-wq). So make sure to retarget the rsrc nodes on those requests. Signed-off-by: Dylan Yudaken --- io_uring/rsrc.c | 46 ++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 42 insertions(+), 4 deletions(-) diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index 106210e0d5d5..8d0d40713a63 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -16,6 +16,7 @@ #include "openclose.h" #include "rsrc.h" #include "opdef.h" +#include "tctx.h" struct io_rsrc_update { struct file *file; @@ -24,6 +25,11 @@ struct io_rsrc_update { u32 offset; }; +struct io_retarget_data { + struct io_ring_ctx *ctx; + unsigned int refs; +}; + static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov, struct io_mapped_ubuf **pimu, struct page **last_hpage); @@ -250,11 +256,42 @@ static void io_rsrc_retarget_schedule(struct io_ring_ctx *ctx) ctx->rsrc_retarget_scheduled = true; } +static void io_retarget_rsrc_wq_cb(struct io_wq_work *work, void *data) +{ + struct io_kiocb *req = container_of(work, struct io_kiocb, work); + struct io_retarget_data *rd = data; + + if (req->ctx != rd->ctx) + return; + + rd->refs += io_rsrc_retarget_req(rd->ctx, req); +} + +static void io_rsrc_retarget_wq(struct io_retarget_data *data) + __must_hold(&data->ctx->uring_lock) +{ + struct io_ring_ctx *ctx = data->ctx; + struct io_tctx_node *node; + + list_for_each_entry(node, &ctx->tctx_list, ctx_node) { + struct io_uring_task *tctx = node->task->io_uring; + + if (!tctx->io_wq) + continue; + + io_wq_for_each(tctx->io_wq, io_retarget_rsrc_wq_cb, data); + } +} + static void __io_rsrc_retarget_work(struct io_ring_ctx *ctx) __must_hold(&ctx->uring_lock) { struct io_rsrc_node *node; - unsigned int refs; + struct io_retarget_data data = { + .ctx = ctx, + .refs = 0 + }; + unsigned int poll_refs; bool any_waiting; if (!ctx->rsrc_node) @@ -273,10 +310,11 @@ static void __io_rsrc_retarget_work(struct io_ring_ctx *ctx) if (!any_waiting) return; - refs = io_rsrc_retarget_table(ctx, &ctx->cancel_table); - refs += io_rsrc_retarget_table(ctx, &ctx->cancel_table_locked); + poll_refs = io_rsrc_retarget_table(ctx, &ctx->cancel_table); + poll_refs += io_rsrc_retarget_table(ctx, &ctx->cancel_table_locked); + io_rsrc_retarget_wq(&data); - ctx->rsrc_cached_refs -= refs; + ctx->rsrc_cached_refs -= (poll_refs + data.refs); while (unlikely(ctx->rsrc_cached_refs < 0)) io_rsrc_refs_refill(ctx); } From patchwork Mon Oct 31 13:41:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dylan Yudaken X-Patchwork-Id: 13025934 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A06C9ECAAA1 for ; Mon, 31 Oct 2022 13:41:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231231AbiJaNlw (ORCPT ); Mon, 31 Oct 2022 09:41:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43578 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231266AbiJaNlt (ORCPT ); Mon, 31 Oct 2022 09:41:49 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7441101F9 for ; Mon, 31 Oct 2022 06:41:48 -0700 (PDT) Received: from pps.filterd (m0044010.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 29VDFlun017676 for ; Mon, 31 
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Subject: [PATCH for-next 04/12] io_uring: reschedule retargeting at shutdown of ring
Date: Mon, 31 Oct 2022 06:41:18 -0700
Message-ID: <20221031134126.82928-5-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>
X-Mailing-List: io-uring@vger.kernel.org

When the ring shuts down, instead of waiting for the work to release its reference, just reschedule it to run now and get the reference back that way.
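The subtlety here is reference ownership, so here is a paraphrased sketch of the scheduling helper as this patch shapes it (the real hunk is in the diff below; the comments spell out who owns the ctx reference):

/*
 * Paraphrase of io_rsrc_retarget_schedule() after this patch.
 * mod_delayed_work() returns false if the work item was idle and has just
 * been queued, and true if a pending timer was merely modified. The ctx
 * reference is therefore taken only on the "newly queued" path; a pending
 * work item already owns the reference taken when it was first scheduled.
 */
static void rsrc_retarget_schedule(struct io_ring_ctx *ctx, bool delay)
{
	unsigned long del = delay ? 60 * HZ : 0;

	if (!mod_delayed_work(system_wq, &ctx->rsrc_retarget_work, del)) {
		percpu_ref_get(&ctx->refs);	/* pin ctx for the queued work */
		ctx->rsrc_retarget_scheduled = true;
	}
	/* else: an armed timer was shortened and its reference is reused */
}

At ring exit, io_rsrc_retarget_exiting() calls this with delay == false, so an armed 60 second timer fires immediately and drops its ctx reference through the normal work path rather than holding up shutdown.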
Signed-off-by: Dylan Yudaken --- io_uring/io_uring.c | 1 + io_uring/rsrc.c | 26 +++++++++++++++++++++----- io_uring/rsrc.h | 1 + 3 files changed, 23 insertions(+), 5 deletions(-) diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index ea2260359c56..32eb305c4ce7 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -2751,6 +2751,7 @@ static __cold void io_ring_exit_work(struct work_struct *work) } io_req_caches_free(ctx); + io_rsrc_retarget_exiting(ctx); if (WARN_ON_ONCE(time_after(jiffies, timeout))) { /* there is little hope left, don't run it too often */ diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index 8d0d40713a63..40b37899e943 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -248,12 +248,20 @@ static unsigned int io_rsrc_retarget_table(struct io_ring_ctx *ctx, return refs; } -static void io_rsrc_retarget_schedule(struct io_ring_ctx *ctx) +static void io_rsrc_retarget_schedule(struct io_ring_ctx *ctx, bool delay) __must_hold(&ctx->uring_lock) { - percpu_ref_get(&ctx->refs); - mod_delayed_work(system_wq, &ctx->rsrc_retarget_work, 60 * HZ); - ctx->rsrc_retarget_scheduled = true; + unsigned long del; + + if (delay) + del = 60 * HZ; + else + del = 0; + + if (likely(!mod_delayed_work(system_wq, &ctx->rsrc_retarget_work, del))) { + percpu_ref_get(&ctx->refs); + ctx->rsrc_retarget_scheduled = true; + } } static void io_retarget_rsrc_wq_cb(struct io_wq_work *work, void *data) @@ -332,6 +340,14 @@ void io_rsrc_retarget_work(struct work_struct *work) percpu_ref_put(&ctx->refs); } +void io_rsrc_retarget_exiting(struct io_ring_ctx *ctx) +{ + mutex_lock(&ctx->uring_lock); + if (ctx->rsrc_retarget_scheduled) + io_rsrc_retarget_schedule(ctx, false); + mutex_unlock(&ctx->uring_lock); +} + void io_wait_rsrc_data(struct io_rsrc_data *data) { if (data && !atomic_dec_and_test(&data->refs)) @@ -414,7 +430,7 @@ void io_rsrc_node_switch(struct io_ring_ctx *ctx, percpu_ref_kill(&rsrc_node->refs); ctx->rsrc_node = NULL; if (!ctx->rsrc_retarget_scheduled) - io_rsrc_retarget_schedule(ctx); + io_rsrc_retarget_schedule(ctx, true); } if (!ctx->rsrc_node) { diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h index 2b94df8fd9e8..93c66475796e 100644 --- a/io_uring/rsrc.h +++ b/io_uring/rsrc.h @@ -55,6 +55,7 @@ struct io_mapped_ubuf { void io_rsrc_put_work(struct work_struct *work); void io_rsrc_retarget_work(struct work_struct *work); +void io_rsrc_retarget_exiting(struct io_ring_ctx *ctx); void io_rsrc_refs_refill(struct io_ring_ctx *ctx); void io_wait_rsrc_data(struct io_rsrc_data *data); void io_rsrc_node_destroy(struct io_rsrc_node *ref_node); From patchwork Mon Oct 31 13:41:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dylan Yudaken X-Patchwork-Id: 13025936 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1FA62FA3746 for ; Mon, 31 Oct 2022 13:41:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231266AbiJaNlw (ORCPT ); Mon, 31 Oct 2022 09:41:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43576 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231254AbiJaNlt (ORCPT ); Mon, 31 Oct 2022 09:41:49 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
C41C8101FA for ; Mon, 31 Oct 2022 06:41:48 -0700 (PDT) Received: from pps.filterd (m0044012.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 29VDFZhC018811 for ; Mon, 31 Oct 2022 06:41:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=meta.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=s2048-2021-q4; bh=U7L3ksaXouKD+oVaCFiQ0CZIndC7HTcK8vsAc5fbv34=; b=gevDVRfJR1+zZG5vkDeh4ON7TejlkHu96fOSH8yDOIe6HLoNpkBLJbhWscAhbBjgjP/I ORoTnnAzIYbvQRbPNMngakG9fJBgV4STZSCqvrqHqvmDQDQ3lEmAuJqONlzEUTyFWIap fk4zrgSAoDEHlBxo3Js9Z6CHJN9pWFesNC+ZeOcH7s0oAG+KhmtWnSH83BE0t9rbgdP3 6TeOhBHLjygoFYB8+52vwoWYwB2pVP8jTkdpf5N3XRllXOoQ1CH+kPDl4iwouBEmS/Q8 sFl8MhtC4ntx4XprN04PgpJbxxT3gl0DQ5bGb4Q3oljkGOw73DtopVPPaiBwXUbUfMNZ jw== Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3kh1vpwwh9-13 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Mon, 31 Oct 2022 06:41:48 -0700 Received: from twshared6758.06.ash9.facebook.com (2620:10d:c085:108::4) by mail.thefacebook.com (2620:10d:c085:11d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.31; Mon, 31 Oct 2022 06:41:47 -0700 Received: by devbig038.lla2.facebook.com (Postfix, from userid 572232) id C16DA8A19650; Mon, 31 Oct 2022 06:41:35 -0700 (PDT) From: Dylan Yudaken To: Jens Axboe , Pavel Begunkov CC: , , Dylan Yudaken Subject: [PATCH for-next 05/12] io_uring: add tracing for io_uring_rsrc_retarget Date: Mon, 31 Oct 2022 06:41:19 -0700 Message-ID: <20221031134126.82928-6-dylany@meta.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20221031134126.82928-1-dylany@meta.com> References: <20221031134126.82928-1-dylany@meta.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: vOKVqMwLxcWki8bEWpr85qKhM-JdUIx4 X-Proofpoint-ORIG-GUID: vOKVqMwLxcWki8bEWpr85qKhM-JdUIx4 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-10-31_15,2022-10-31_01,2022-06-22_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org Add event tracing to show how many poll/wq requests were retargeted Signed-off-by: Dylan Yudaken --- include/trace/events/io_uring.h | 30 ++++++++++++++++++++++++++++++ io_uring/rsrc.c | 2 ++ 2 files changed, 32 insertions(+) diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h index 936fd41bf147..b47be89dd270 100644 --- a/include/trace/events/io_uring.h +++ b/include/trace/events/io_uring.h @@ -684,6 +684,36 @@ TRACE_EVENT(io_uring_local_work_run, TP_printk("ring %p, count %d, loops %u", __entry->ctx, __entry->count, __entry->loops) ); +/* + * io_uring_rsrc_retarget - ran a rsrc retarget + * + * @ctx: pointer to a io_uring_ctx + * @poll: how many retargeted that were polling + * @wq: how many retargeted that were in the wq + * + */ +TRACE_EVENT(io_uring_rsrc_retarget, + + TP_PROTO(void *ctx, unsigned int poll, unsigned int wq), + + TP_ARGS(ctx, poll, wq), + + TP_STRUCT__entry( + __field(void *, ctx) + __field(unsigned int, poll) + __field(unsigned int, wq) + ), + + TP_fast_assign( + __entry->ctx = ctx; + __entry->poll = poll; + __entry->wq = wq; + ), + + TP_printk("ring %p, poll %u, wq %u", + __entry->ctx, __entry->poll, __entry->wq) +); + #endif /* _TRACE_IO_URING_H */ /* This part must be outside protection */ diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index 
40b37899e943..00402533cee5 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -325,6 +325,8 @@ static void __io_rsrc_retarget_work(struct io_ring_ctx *ctx) ctx->rsrc_cached_refs -= (poll_refs + data.refs); while (unlikely(ctx->rsrc_cached_refs < 0)) io_rsrc_refs_refill(ctx); + + trace_io_uring_rsrc_retarget(ctx, poll_refs, data.refs); } void io_rsrc_retarget_work(struct work_struct *work)
From patchwork Mon Oct 31 13:41:20 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025937
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Subject: [PATCH for-next 06/12] io_uring: add fixed file peeking function
Date: Mon, 31 Oct 2022 06:41:20 -0700
Message-ID: <20221031134126.82928-7-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>
X-Mailing-List: io-uring@vger.kernel.org Add a helper function to grab the fixed file at a given offset. Will be useful for retarget op handlers. Signed-off-by: Dylan Yudaken --- io_uring/io_uring.c | 26 ++++++++++++++++++++------ io_uring/io_uring.h | 1 + 2 files changed, 21 insertions(+), 6 deletions(-) diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index 32eb305c4ce7..a052653fc65e 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -1841,6 +1841,23 @@ void io_wq_submit_work(struct io_wq_work *work) io_req_task_queue_fail(req, ret); } +static unsigned long __io_file_peek_fixed(struct io_kiocb *req, int fd) + __must_hold(&req->ctx->uring_lock) +{ + struct io_ring_ctx *ctx = req->ctx; + + if (unlikely((unsigned int)fd >= ctx->nr_user_files)) + return 0; + fd = array_index_nospec(fd, ctx->nr_user_files); + return io_fixed_file_slot(&ctx->file_table, fd)->file_ptr; +} + +struct file *io_file_peek_fixed(struct io_kiocb *req, int fd) + __must_hold(&req->ctx->uring_lock) +{ + return (struct file *) (__io_file_peek_fixed(req, fd) & FFS_MASK); +} + inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd, unsigned int issue_flags) { @@ -1849,17 +1866,14 @@ inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd, unsigned long file_ptr; io_ring_submit_lock(ctx, issue_flags); - - if (unlikely((unsigned int)fd >= ctx->nr_user_files)) - goto out; - fd = array_index_nospec(fd, ctx->nr_user_files); - file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr; + file_ptr = __io_file_peek_fixed(req, fd); file = (struct file *) (file_ptr & FFS_MASK); file_ptr &= ~FFS_MASK; /* mask in overlapping REQ_F and FFS bits */ req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT); io_req_set_rsrc_node(req, ctx, 0); -out: + WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap)); + io_ring_submit_unlock(ctx, issue_flags); return file; } diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h index ef77d2aa3172..781471bfba12 100644 --- a/io_uring/io_uring.h +++ b/io_uring/io_uring.h @@ -44,6 +44,7 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages); struct file *io_file_get_normal(struct io_kiocb *req, int fd); struct file *io_file_get_fixed(struct io_kiocb *req, int fd, unsigned issue_flags); +struct file *io_file_peek_fixed(struct io_kiocb *req, int fd); static inline bool io_req_ffs_set(struct io_kiocb *req) { From patchwork Mon Oct 31 13:41:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dylan Yudaken X-Patchwork-Id: 13025938 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A00FECAAA1 for ; Mon, 31 Oct 2022 13:41:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230519AbiJaNl4 (ORCPT ); Mon, 31 Oct 2022 09:41:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43630 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231283AbiJaNly (ORCPT ); Mon, 31 Oct 2022 09:41:54 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A2ACB101F8 for ; Mon, 31 Oct 2022 06:41:53 -0700 (PDT) Received: from pps.filterd (m0148461.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Subject: [PATCH for-next 07/12] io_uring: split send_zc specific struct out of io_sr_msg
Date: Mon, 31 Oct 2022 06:41:21 -0700
Message-ID: <20221031134126.82928-8-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>
X-Mailing-List: io-uring@vger.kernel.org

Split out the send_zc-specific parts of struct io_sr_msg, as other opcodes are going to be specialized.
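The shape of the split, abbreviated (field lists elided; see the diff below for the real definitions):

/* Shared send/recv state stays in io_sr_msg; the zerocopy-send-only state
 * is wrapped around it. The send_zc handlers then view the request's
 * command area as the wrapper and reach shared fields through &zc->sr. */
struct io_sr_msg {
	/* buf, len, msg_flags, done_io, addr, ... (unchanged fields) */
};

struct io_send_zc_msg {
	struct io_sr_msg sr;
	struct io_kiocb *notif;		/* only send_zc needs the notif */
};

/* e.g. in io_send_zc():
 *	struct io_send_zc_msg *zc = io_kiocb_to_cmd(req, struct io_send_zc_msg);
 *	struct io_sr_msg *sr = &zc->sr;
 */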
Signed-off-by: Dylan Yudaken --- io_uring/net.c | 77 +++++++++++++++++++++++++++----------------------- 1 file changed, 42 insertions(+), 35 deletions(-) diff --git a/io_uring/net.c b/io_uring/net.c index 15dea91625e2..f4638e79a022 100644 --- a/io_uring/net.c +++ b/io_uring/net.c @@ -63,10 +63,14 @@ struct io_sr_msg { /* initialised and used only by !msg send variants */ u16 addr_len; void __user *addr; - /* used only for send zerocopy */ - struct io_kiocb *notif; }; +struct io_send_zc_msg { + struct io_sr_msg sr; + struct io_kiocb *notif; +}; + + #define IO_APOLL_MULTI_POLLED (REQ_F_APOLL_MULTISHOT | REQ_F_POLLED) int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) @@ -910,7 +914,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags) void io_send_zc_cleanup(struct io_kiocb *req) { - struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg); + struct io_send_zc_msg *zc = io_kiocb_to_cmd(req, struct io_send_zc_msg); struct io_async_msghdr *io; if (req_has_async_data(req)) { @@ -927,8 +931,9 @@ void io_send_zc_cleanup(struct io_kiocb *req) int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { - struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg); + struct io_send_zc_msg *zc = io_kiocb_to_cmd(req, struct io_send_zc_msg); struct io_ring_ctx *ctx = req->ctx; + struct io_sr_msg *sr = &zc->sr; struct io_kiocb *notif; if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3))) @@ -937,8 +942,8 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) if (req->flags & REQ_F_CQE_SKIP) return -EINVAL; - zc->flags = READ_ONCE(sqe->ioprio); - if (zc->flags & ~(IORING_RECVSEND_POLL_FIRST | + sr->flags = READ_ONCE(sqe->ioprio); + if (sr->flags & ~(IORING_RECVSEND_POLL_FIRST | IORING_RECVSEND_FIXED_BUF)) return -EINVAL; notif = zc->notif = io_alloc_notif(ctx); @@ -948,7 +953,7 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) notif->cqe.res = 0; notif->cqe.flags = IORING_CQE_F_NOTIF; req->flags |= REQ_F_NEED_CLEANUP; - if (zc->flags & IORING_RECVSEND_FIXED_BUF) { + if (sr->flags & IORING_RECVSEND_FIXED_BUF) { unsigned idx = READ_ONCE(sqe->buf_index); if (unlikely(idx >= ctx->nr_user_bufs)) @@ -961,26 +966,26 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) if (req->opcode == IORING_OP_SEND_ZC) { if (READ_ONCE(sqe->__pad3[0])) return -EINVAL; - zc->addr = u64_to_user_ptr(READ_ONCE(sqe->addr2)); - zc->addr_len = READ_ONCE(sqe->addr_len); + sr->addr = u64_to_user_ptr(READ_ONCE(sqe->addr2)); + sr->addr_len = READ_ONCE(sqe->addr_len); } else { if (unlikely(sqe->addr2 || sqe->file_index)) return -EINVAL; - if (unlikely(zc->flags & IORING_RECVSEND_FIXED_BUF)) + if (unlikely(sr->flags & IORING_RECVSEND_FIXED_BUF)) return -EINVAL; } - zc->buf = u64_to_user_ptr(READ_ONCE(sqe->addr)); - zc->len = READ_ONCE(sqe->len); - zc->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL; - if (zc->msg_flags & MSG_DONTWAIT) + sr->buf = u64_to_user_ptr(READ_ONCE(sqe->addr)); + sr->len = READ_ONCE(sqe->len); + sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL; + if (sr->msg_flags & MSG_DONTWAIT) req->flags |= REQ_F_NOWAIT; - zc->done_io = 0; + sr->done_io = 0; #ifdef CONFIG_COMPAT if (req->ctx->compat) - zc->msg_flags |= MSG_CMSG_COMPAT; + sr->msg_flags |= MSG_CMSG_COMPAT; #endif return 0; } @@ -1046,7 +1051,8 @@ static int io_sg_from_iter(struct sock *sk, struct sk_buff *skb, int io_send_zc(struct io_kiocb *req, unsigned int issue_flags) { struct sockaddr_storage __address; - 
struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg); + struct io_send_zc_msg *zc = io_kiocb_to_cmd(req, struct io_send_zc_msg); + struct io_sr_msg *sr = &zc->sr; struct msghdr msg; struct iovec iov; struct socket *sock; @@ -1064,42 +1070,42 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags) msg.msg_controllen = 0; msg.msg_namelen = 0; - if (zc->addr) { + if (sr->addr) { if (req_has_async_data(req)) { struct io_async_msghdr *io = req->async_data; msg.msg_name = &io->addr; } else { - ret = move_addr_to_kernel(zc->addr, zc->addr_len, &__address); + ret = move_addr_to_kernel(sr->addr, sr->addr_len, &__address); if (unlikely(ret < 0)) return ret; msg.msg_name = (struct sockaddr *)&__address; } - msg.msg_namelen = zc->addr_len; + msg.msg_namelen = sr->addr_len; } if (!(req->flags & REQ_F_POLLED) && - (zc->flags & IORING_RECVSEND_POLL_FIRST)) + (sr->flags & IORING_RECVSEND_POLL_FIRST)) return io_setup_async_addr(req, &__address, issue_flags); - if (zc->flags & IORING_RECVSEND_FIXED_BUF) { + if (sr->flags & IORING_RECVSEND_FIXED_BUF) { ret = io_import_fixed(WRITE, &msg.msg_iter, req->imu, - (u64)(uintptr_t)zc->buf, zc->len); + (u64)(uintptr_t)sr->buf, sr->len); if (unlikely(ret)) return ret; msg.sg_from_iter = io_sg_from_iter; } else { - ret = import_single_range(WRITE, zc->buf, zc->len, &iov, + ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter); if (unlikely(ret)) return ret; - ret = io_notif_account_mem(zc->notif, zc->len); + ret = io_notif_account_mem(zc->notif, sr->len); if (unlikely(ret)) return ret; msg.sg_from_iter = io_sg_from_iter_iovec; } - msg_flags = zc->msg_flags | MSG_ZEROCOPY; + msg_flags = sr->msg_flags | MSG_ZEROCOPY; if (issue_flags & IO_URING_F_NONBLOCK) msg_flags |= MSG_DONTWAIT; if (msg_flags & MSG_WAITALL) @@ -1114,9 +1120,9 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags) return io_setup_async_addr(req, &__address, issue_flags); if (ret > 0 && io_net_retry(sock, msg.msg_flags)) { - zc->len -= ret; - zc->buf += ret; - zc->done_io += ret; + sr->len -= ret; + sr->buf += ret; + sr->done_io += ret; req->flags |= REQ_F_PARTIAL_IO; return io_setup_async_addr(req, &__address, issue_flags); } @@ -1126,9 +1132,9 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags) } if (ret >= 0) - ret += zc->done_io; - else if (zc->done_io) - ret = zc->done_io; + ret += sr->done_io; + else if (sr->done_io) + ret = sr->done_io; /* * If we're in io-wq we can't rely on tw ordering guarantees, defer @@ -1144,8 +1150,9 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags) int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags) { - struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); + struct io_send_zc_msg *zc = io_kiocb_to_cmd(req, struct io_send_zc_msg); struct io_async_msghdr iomsg, *kmsg; + struct io_sr_msg *sr = &zc->sr; struct socket *sock; unsigned flags; int ret, min_ret = 0; @@ -1175,7 +1182,7 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags) if (flags & MSG_WAITALL) min_ret = iov_iter_count(&kmsg->msg.msg_iter); - kmsg->msg.msg_ubuf = &io_notif_to_data(sr->notif)->uarg; + kmsg->msg.msg_ubuf = &io_notif_to_data(zc->notif)->uarg; kmsg->msg.sg_from_iter = io_sg_from_iter_iovec; ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags); @@ -1209,7 +1216,7 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags) * flushing notif to io_send_zc_cleanup() */ if (!(issue_flags & IO_URING_F_UNLOCKED)) { - io_notif_flush(sr->notif); + io_notif_flush(zc->notif); req->flags &= 
~REQ_F_NEED_CLEANUP; } io_req_set_res(req, ret, IORING_CQE_F_MORE);
From patchwork Mon Oct 31 13:41:22 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025939
From: Dylan Yudaken
To: Jens Axboe, Pavel Begunkov
Subject: [PATCH for-next 08/12] io_uring: recv/recvmsg retarget_rsrc support
Date: Mon, 31 Oct 2022 06:41:22 -0700
Message-ID: <20221031134126.82928-9-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>
X-Mailing-List: io-uring@vger.kernel.org

Add can_retarget_rsrc handler for recv/recvmsg Signed-off-by: Dylan Yudaken --- io_uring/net.c | 22 +++++++++++++++++++++- io_uring/net.h | 1 + io_uring/opdef.c | 2 ++ 3 files changed, 24 insertions(+), 1 deletion(-) diff --git a/io_uring/net.c b/io_uring/net.c index
f4638e79a022..0fa05ef52dd3 100644 --- a/io_uring/net.c +++ b/io_uring/net.c @@ -70,6 +70,11 @@ struct io_send_zc_msg { struct io_kiocb *notif; }; +struct io_recv_msg { + struct io_sr_msg sr; + int retarget_fd; +}; + #define IO_APOLL_MULTI_POLLED (REQ_F_APOLL_MULTISHOT | REQ_F_POLLED) @@ -547,7 +552,8 @@ int io_recvmsg_prep_async(struct io_kiocb *req) int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { - struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); + struct io_recv_msg *rcv = io_kiocb_to_cmd(req, struct io_recv_msg); + struct io_sr_msg *sr = &rcv->sr; if (unlikely(sqe->file_index || sqe->addr2)) return -EINVAL; @@ -572,6 +578,11 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) req->flags |= REQ_F_APOLL_MULTISHOT; } + if (req->flags & REQ_F_FIXED_FILE) + rcv->retarget_fd = req->cqe.fd; + else + rcv->retarget_fd = -1; + #ifdef CONFIG_COMPAT if (req->ctx->compat) sr->msg_flags |= MSG_CMSG_COMPAT; @@ -709,6 +720,15 @@ static int io_recvmsg_multishot(struct socket *sock, struct io_sr_msg *io, kmsg->controllen + err; } +bool io_recv_can_retarget_rsrc(struct io_kiocb *req) +{ + struct io_recv_msg *rcv = io_kiocb_to_cmd(req, struct io_recv_msg); + + if (rcv->retarget_fd < 0) + return false; + return io_file_peek_fixed(req, rcv->retarget_fd) == req->file; +} + int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags) { struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); diff --git a/io_uring/net.h b/io_uring/net.h index 5ffa11bf5d2e..6b5719084494 100644 --- a/io_uring/net.h +++ b/io_uring/net.h @@ -43,6 +43,7 @@ int io_recvmsg_prep_async(struct io_kiocb *req); int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe); int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags); int io_recv(struct io_kiocb *req, unsigned int issue_flags); +bool io_recv_can_retarget_rsrc(struct io_kiocb *req); void io_sendrecv_fail(struct io_kiocb *req); diff --git a/io_uring/opdef.c b/io_uring/opdef.c index 83dc0f9ad3b2..1a0be5681c7b 100644 --- a/io_uring/opdef.c +++ b/io_uring/opdef.c @@ -178,6 +178,7 @@ const struct io_op_def io_op_defs[] = { .prep_async = io_recvmsg_prep_async, .cleanup = io_sendmsg_recvmsg_cleanup, .fail = io_sendrecv_fail, + .can_retarget_rsrc = io_recv_can_retarget_rsrc, #else .prep = io_eopnotsupp_prep, #endif @@ -340,6 +341,7 @@ const struct io_op_def io_op_defs[] = { .prep = io_recvmsg_prep, .issue = io_recv, .fail = io_sendrecv_fail, + .can_retarget_rsrc = io_recv_can_retarget_rsrc, #else .prep = io_eopnotsupp_prep, #endif From patchwork Mon Oct 31 13:41:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dylan Yudaken X-Patchwork-Id: 13025940 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1724FA3744 for ; Mon, 31 Oct 2022 13:42:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231326AbiJaNl7 (ORCPT ); Mon, 31 Oct 2022 09:41:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231307AbiJaNl6 (ORCPT ); Mon, 31 Oct 2022 09:41:58 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1FBEB101F3 for ; 
Mon, 31 Oct 2022 06:41:56 -0700 (PDT) Received: from pps.filterd (m0109333.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 29VDFakU007975 for ; Mon, 31 Oct 2022 06:41:56 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=meta.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=s2048-2021-q4; bh=kCnz8uQ/hWag4I8ws1bxcXGmJng6+FeiSra6gGWm1+E=; b=L/PL/Wcdh2dCBoAmXSDU2Hw9D705tySeVDCv86Sm+m3C8m2uAoqgsK7ayKWlc/FqkVSu aZzBu/jdIJ5osmqqMYH/eZ4Lw7r2Xm6XEuAjDHcUfCx3QnY3O35FYgt+NNLMTV3lwVKW tYw53heDKzmXOoyfsJaZDj02hj/eQTzV+1GcjjJiNVp9m0sRzS0BxQ34AI+6V+ceqnAT lhrgmX6jgMn18dnFCzop1GmljKGewxGqCw3IGpCLfNnDaD3bQR5d3wAjfitQTfWKTK9d 2+L6/GAAj5+V9zbSWGDEWYeKnHNsQHhyn2OARuRBSd3jyIeSv0gjwDwgTdLCR7h8y4Vh lg== Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3kh07p697e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Mon, 31 Oct 2022 06:41:55 -0700 Received: from twshared14438.02.ash8.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:21d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.31; Mon, 31 Oct 2022 06:41:55 -0700 Received: by devbig038.lla2.facebook.com (Postfix, from userid 572232) id E3BDD8A1965C; Mon, 31 Oct 2022 06:41:35 -0700 (PDT) From: Dylan Yudaken To: Jens Axboe , Pavel Begunkov CC: , , Dylan Yudaken Subject: [PATCH for-next 09/12] io_uring: accept retarget_rsrc support Date: Mon, 31 Oct 2022 06:41:23 -0700 Message-ID: <20221031134126.82928-10-dylany@meta.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20221031134126.82928-1-dylany@meta.com> References: <20221031134126.82928-1-dylany@meta.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: af97CRxj55TuegxBFZBu6YP-4yR7vEKm X-Proofpoint-ORIG-GUID: af97CRxj55TuegxBFZBu6YP-4yR7vEKm X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-10-31_15,2022-10-31_01,2022-06-22_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org Add can_retarget_rsrc handler for accept Signed-off-by: Dylan Yudaken --- io_uring/net.c | 15 +++++++++++++++ io_uring/net.h | 1 + io_uring/opdef.c | 1 + 3 files changed, 17 insertions(+) diff --git a/io_uring/net.c b/io_uring/net.c index 0fa05ef52dd3..429176f3d191 100644 --- a/io_uring/net.c +++ b/io_uring/net.c @@ -30,6 +30,7 @@ struct io_accept { int flags; u32 file_slot; unsigned long nofile; + int retarget_fd; }; struct io_socket { @@ -1255,6 +1256,15 @@ void io_sendrecv_fail(struct io_kiocb *req) req->cqe.flags |= IORING_CQE_F_MORE; } +bool io_accept_can_retarget_rsrc(struct io_kiocb *req) +{ + struct io_accept *accept = io_kiocb_to_cmd(req, struct io_accept); + + if (accept->retarget_fd < 0) + return false; + return io_file_peek_fixed(req, accept->retarget_fd) == req->file; +} + int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { struct io_accept *accept = io_kiocb_to_cmd(req, struct io_accept); @@ -1285,6 +1295,11 @@ int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) accept->flags = (accept->flags & ~SOCK_NONBLOCK) | O_NONBLOCK; if (flags & IORING_ACCEPT_MULTISHOT) req->flags |= REQ_F_APOLL_MULTISHOT; + + if (req->flags & REQ_F_FIXED_FILE) + accept->retarget_fd = req->cqe.fd; + else + accept->retarget_fd = -1; return 0; } diff --git a/io_uring/net.h b/io_uring/net.h index 
From patchwork Mon Oct 31 13:41:24 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025941
From: Dylan Yudaken
To: Jens Axboe , Pavel Begunkov
CC: , , Dylan Yudaken
Subject: [PATCH for-next 10/12] io_uring: read retarget_rsrc support
Date: Mon, 31 Oct 2022 06:41:24 -0700
Message-ID: <20221031134126.82928-11-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>

Add can_retarget_rsrc handler for read

Signed-off-by: Dylan Yudaken
---
 io_uring/opdef.c |  2 ++
 io_uring/rw.c    | 14 ++++++++++++++
 io_uring/rw.h    |  1 +
 3 files changed, 17 insertions(+)

diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 7c94f1a4315a..0018fe39cbb5 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -70,6 +70,7 @@ const struct io_op_def io_op_defs[] = {
 		.prep_async = io_readv_prep_async,
 		.cleanup = io_readv_writev_cleanup,
 		.fail = io_rw_fail,
+		.can_retarget_rsrc = io_read_can_retarget_rsrc,
 	},
 	[IORING_OP_WRITEV] = {
 		.needs_file = 1,
@@ -284,6 +285,7 @@ const struct io_op_def io_op_defs[] = {
 		.prep = io_prep_rw,
 		.issue = io_read,
 		.fail = io_rw_fail,
+		.can_retarget_rsrc = io_read_can_retarget_rsrc,
 	},
 	[IORING_OP_WRITE] = {
 		.needs_file = 1,
diff --git a/io_uring/rw.c b/io_uring/rw.c
index bb47cc4da713..7618e402dcec 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1068,3 +1068,17 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 		io_free_batch_list(ctx, pos);
 	return nr_events;
 }
+
+bool io_read_can_retarget_rsrc(struct io_kiocb *req)
+{
+	struct file *f;
+
+	if (!(req->flags & REQ_F_FIXED_FILE))
+		return true;
+
+	f = io_file_peek_fixed(req, req->cqe.fd);
+	if (f != req->file)
+		return false;
+
+	return true;
+}
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 3b733f4b610a..715e7249463b 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -22,3 +22,4 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags);
 int io_writev_prep_async(struct io_kiocb *req);
 void io_readv_writev_cleanup(struct io_kiocb *req);
 void io_rw_fail(struct io_kiocb *req);
+bool io_read_can_retarget_rsrc(struct io_kiocb *req);
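The fixed-file check matters because a file slot can be repopulated while a request is in flight. A hypothetical liburing sequence (illustrative only, not from the patch) in which io_read_can_retarget_rsrc() would refuse to retarget:

/*
 * Hypothetical sequence, for illustration only. The read below is prepared
 * against fixed slot 0 while it holds pipe_rd; once the slot is updated to
 * point at a different file, the slot no longer matches req->file, so the
 * request keeps its original rsrc node (which still owns the old reference).
 */
#include <liburing.h>

static void fixed_read_then_update(struct io_uring *ring, int pipe_rd,
				   int other_fd, char *buf, unsigned len)
{
	struct io_uring_sqe *sqe;

	io_uring_register_files(ring, &pipe_rd, 1);

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, 0, buf, len, 0);	/* 0 is the slot, not a raw fd */
	sqe->flags |= IOSQE_FIXED_FILE;
	io_uring_submit(ring);		/* stays pending until the pipe has data */

	/* Replace slot 0 while the read is still in flight. */
	io_uring_register_files_update(ring, 0, &other_fd, 1);
}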
From patchwork Mon Oct 31 13:41:25 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025942
From: Dylan Yudaken
To: Jens Axboe , Pavel Begunkov
CC: , , Dylan Yudaken
Subject: [PATCH for-next 11/12] io_uring: read_fixed retarget_rsrc support
Date: Mon, 31 Oct 2022 06:41:25 -0700
Message-ID: <20221031134126.82928-12-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>

Add can_retarget_rsrc handler for read_fixed

Signed-off-by: Dylan Yudaken
---
 io_uring/opdef.c |  1 +
 io_uring/rw.c    | 15 +++++++++++++++
 io_uring/rw.h    |  1 +
 3 files changed, 17 insertions(+)

diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 0018fe39cbb5..5159b3abc2b2 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -109,6 +109,7 @@ const struct io_op_def io_op_defs[] = {
 		.prep = io_prep_rw,
 		.issue = io_read,
 		.fail = io_rw_fail,
+		.can_retarget_rsrc = io_read_fixed_can_retarget_rsrc,
 	},
 	[IORING_OP_WRITE_FIXED] = {
 		.needs_file = 1,
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 7618e402dcec..d82fbe074bd9 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1082,3 +1082,18 @@ bool io_read_can_retarget_rsrc(struct io_kiocb *req)
 
 	return true;
 }
+
+bool io_read_fixed_can_retarget_rsrc(struct io_kiocb *req)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	u16 index;
+
+	if (unlikely(req->buf_index >= ctx->nr_user_bufs))
+		return false;
+
+	index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
+	if (ctx->user_bufs[index] != req->imu)
+		return false;
+
+	return io_read_can_retarget_rsrc(req);
+}
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 715e7249463b..69cbc36560f6 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -23,3 +23,4 @@ int io_writev_prep_async(struct io_kiocb *req);
 void io_readv_writev_cleanup(struct io_kiocb *req);
 void io_rw_fail(struct io_kiocb *req);
 bool io_read_can_retarget_rsrc(struct io_kiocb *req);
+bool io_read_fixed_can_retarget_rsrc(struct io_kiocb *req);
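A READ_FIXED request depends on two registered resources: the file slot and the buffer at buf_index. The sketch below (illustrative only, not part of the patch; liburing userspace, error handling omitted) shows such a request; the handler only allows retargeting when both tables still resolve to the objects the request was prepared with.

/*
 * Illustrative userspace sketch, not part of this patch: a READ_FIXED
 * request using fixed file slot 0 and registered buffer index 0.
 */
#include <liburing.h>
#include <sys/uio.h>

static void submit_read_fixed(struct io_uring *ring, int fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct io_uring_sqe *sqe;

	io_uring_register_buffers(ring, &iov, 1);
	io_uring_register_files(ring, &fd, 1);

	sqe = io_uring_get_sqe(ring);
	/* file slot 0, registered buffer index 0 */
	io_uring_prep_read_fixed(sqe, 0, buf, len, 0, 0);
	sqe->flags |= IOSQE_FIXED_FILE;
	io_uring_submit(ring);
}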
From patchwork Mon Oct 31 13:41:26 2022
X-Patchwork-Submitter: Dylan Yudaken
X-Patchwork-Id: 13025943
From: Dylan Yudaken
To: Jens Axboe , Pavel Begunkov
CC: , , Dylan Yudaken
Subject: [PATCH for-next 12/12] io_uring: poll_add retarget_rsrc support
Date: Mon, 31 Oct 2022 06:41:26 -0700
Message-ID: <20221031134126.82928-13-dylany@meta.com>
In-Reply-To: <20221031134126.82928-1-dylany@meta.com>
References: <20221031134126.82928-1-dylany@meta.com>

Add can_retarget_rsrc handler for poll.

Note that the copy of the fd is stashed in the middle of struct io_poll,
where there is a padding hole; this is the only way to add it without
growing the structure beyond the size of struct io_cmd_data.
Signed-off-by: Dylan Yudaken
---
 io_uring/opdef.c |  1 +
 io_uring/poll.c  | 12 ++++++++++++
 io_uring/poll.h  |  2 ++
 3 files changed, 15 insertions(+)

diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 5159b3abc2b2..952ea8ff5032 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -133,6 +133,7 @@ const struct io_op_def io_op_defs[] = {
 		.name = "POLL_ADD",
 		.prep = io_poll_add_prep,
 		.issue = io_poll_add,
+		.can_retarget_rsrc = io_poll_can_retarget_rsrc,
 	},
 	[IORING_OP_POLL_REMOVE] = {
 		.audit_skip = 1,
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 0d9f49c575e0..fde8060b9399 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -863,6 +863,7 @@ int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return -EINVAL;
 
 	poll->events = io_poll_parse_events(sqe, flags);
+	poll->fd = req->cqe.fd;
 	return 0;
 }
 
@@ -963,3 +964,14 @@ void io_apoll_cache_free(struct io_cache_entry *entry)
 {
 	kfree(container_of(entry, struct async_poll, cache));
 }
+
+bool io_poll_can_retarget_rsrc(struct io_kiocb *req)
+{
+	struct io_poll *poll = io_kiocb_to_cmd(req, struct io_poll);
+
+	if (req->flags & REQ_F_FIXED_FILE &&
+	    io_file_peek_fixed(req, poll->fd) != req->file)
+		return false;
+
+	return true;
+}
diff --git a/io_uring/poll.h b/io_uring/poll.h
index 5f3bae50fc81..dcc4b06bcea1 100644
--- a/io_uring/poll.h
+++ b/io_uring/poll.h
@@ -12,6 +12,7 @@ struct io_poll {
 	struct file *file;
 	struct wait_queue_head *head;
 	__poll_t events;
+	int fd;	/* only used by poll_add */
 	struct wait_queue_entry wait;
 };
 
@@ -37,3 +38,4 @@ bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
 			bool cancel_all);
 
 void io_apoll_cache_free(struct io_cache_entry *entry);
+bool io_poll_can_retarget_rsrc(struct io_kiocb *req);
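To illustrate the layout note in the commit message, here is a self-contained mock. These are stand-in userspace structs, not the kernel's io_poll or io_cmd_data: on an LP64 target, a 4-byte member placed after a 4-byte member and before an 8-byte-aligned member lands in existing padding, so the overall structure size does not change.

/* Stand-in structs for illustration only; not the kernel definitions. */
#include <stdio.h>

struct waiter {			/* 8-byte aligned stand-in for wait_queue_entry */
	void *private;
	void (*func)(void *);
};

struct poll_before {		/* roughly: file, head, events, wait */
	void *file;
	void *head;
	unsigned int events;	/* 4 bytes, followed by a 4-byte hole */
	struct waiter wait;
};

struct poll_after {		/* same, with the fd stashed in the hole */
	void *file;
	void *head;
	unsigned int events;
	int fd;
	struct waiter wait;
};

int main(void)
{
	/* On LP64 both sizes print the same value: the extra fd is "free". */
	printf("before=%zu after=%zu\n",
	       sizeof(struct poll_before), sizeof(struct poll_after));
	return 0;
}

With the real structs the same reasoning keeps sizeof(struct io_poll) within sizeof(struct io_cmd_data), which is the bound the commit message refers to.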