From patchwork Tue Mar 14 17:16:38 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13174797
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: deller@gmx.de, Jens Axboe
Subject: [PATCH 1/5] io_uring: Adjust mapping wrt architecture aliasing requirements
Date: Tue, 14 Mar 2023 11:16:38 -0600
Message-Id: <20230314171641.10542-2-axboe@kernel.dk>
In-Reply-To: <20230314171641.10542-1-axboe@kernel.dk>
References: <20230314171641.10542-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

From: Helge Deller

Some architectures have memory cache aliasing requirements (e.g. parisc)
if memory is shared between userspace and kernel. This patch fixes the
kernel to return an aliased address when asked by userspace via mmap().
Reported-by: matoro
Signed-off-by: Helge Deller
Signed-off-by: Jens Axboe
---
 io_uring/io_uring.c | 51 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 722624b6d0dc..3adecebbac71 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -72,6 +72,7 @@
 #include
 #include
 #include
+#include

 #define CREATE_TRACE_POINTS
 #include
@@ -3317,6 +3318,54 @@ static __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
 }

+static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
+			unsigned long addr, unsigned long len,
+			unsigned long pgoff, unsigned long flags)
+{
+	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
+	struct vm_unmapped_area_info info;
+	void *ptr;
+
+	/*
+	 * Do not allow to map to user-provided address to avoid breaking the
+	 * aliasing rules. Userspace is not able to guess the offset address of
+	 * kernel kmalloc()ed memory area.
+	 */
+	if (addr)
+		return -EINVAL;
+
+	ptr = io_uring_validate_mmap_request(filp, pgoff, len);
+	if (IS_ERR(ptr))
+		return -ENOMEM;
+
+	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+	info.length = len;
+	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
+#ifdef SHM_COLOUR
+	info.align_mask = PAGE_MASK & (SHM_COLOUR - 1UL);
+#else
+	info.align_mask = PAGE_MASK & (SHMLBA - 1UL);
+#endif
+	info.align_offset = (unsigned long) ptr;
+
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	addr = vm_unmapped_area(&info);
+	if (offset_in_page(addr)) {
+		info.flags = 0;
+		info.low_limit = TASK_UNMAPPED_BASE;
+		info.high_limit = mmap_end;
+		addr = vm_unmapped_area(&info);
+	}
+
+	return addr;
+}
+
 #else /* !CONFIG_MMU */

 static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
@@ -3529,6 +3578,8 @@ static const struct file_operations io_uring_fops = {
 #ifndef CONFIG_MMU
 	.get_unmapped_area = io_uring_nommu_get_unmapped_area,
 	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
+#else
+	.get_unmapped_area = io_uring_mmu_get_unmapped_area,
 #endif
 	.poll = io_uring_poll,
 #ifdef CONFIG_PROC_FS

From patchwork Tue Mar 14 17:16:39 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13174796
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: deller@gmx.de, Jens Axboe
Subject: [PATCH 2/5] io_uring/kbuf: move pinning of provided buffer ring into helper
Date: Tue, 14 Mar 2023 11:16:39 -0600
Message-Id: <20230314171641.10542-3-axboe@kernel.dk>
In-Reply-To: <20230314171641.10542-1-axboe@kernel.dk>
References: <20230314171641.10542-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

In preparation for allowing the kernel to allocate the provided buffer
rings and have the application mmap it instead, abstract out the current
method of pinning and mapping the user allocated ring.

No functional changes intended in this patch.

Signed-off-by: Jens Axboe
---
 io_uring/kbuf.c | 37 +++++++++++++++++++++++++------------
 1 file changed, 25 insertions(+), 12 deletions(-)

diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 3002dc827195..3adc08f90e41 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -463,14 +463,32 @@ int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
 	return IOU_OK;
 }

-int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+static int io_pin_pbuf_ring(struct io_uring_buf_reg *reg,
+			    struct io_buffer_list *bl)
 {
 	struct io_uring_buf_ring *br;
-	struct io_uring_buf_reg reg;
-	struct io_buffer_list *bl, *free_bl = NULL;
 	struct page **pages;
 	int nr_pages;

+	pages = io_pin_pages(reg->ring_addr,
+			     flex_array_size(br, bufs, reg->ring_entries),
+			     &nr_pages);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
+
+	br = page_address(pages[0]);
+	bl->buf_pages = pages;
+	bl->buf_nr_pages = nr_pages;
+	bl->buf_ring = br;
+	return 0;
+}
+
+int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+{
+	struct io_uring_buf_reg reg;
+	struct io_buffer_list *bl, *free_bl = NULL;
+	int ret;
+
 	if (copy_from_user(&reg, arg, sizeof(reg)))
 		return -EFAULT;

@@ -504,20 +522,15 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 		return -ENOMEM;
 	}

-	pages = io_pin_pages(reg.ring_addr,
-			     flex_array_size(br, bufs, reg.ring_entries),
-			     &nr_pages);
-	if (IS_ERR(pages)) {
+	ret = io_pin_pbuf_ring(&reg, bl);
+	if (ret) {
 		kfree(free_bl);
-		return PTR_ERR(pages);
+		return ret;
 	}

-	br = page_address(pages[0]);
-	bl->buf_pages = pages;
-	bl->buf_nr_pages = nr_pages;
 	bl->nr_entries = reg.ring_entries;
-	bl->buf_ring = br;
 	bl->mask = reg.ring_entries - 1;
+
 	io_buffer_add_list(ctx, bl, reg.bgid);
 	return 0;
 }

From patchwork Tue Mar 14 17:16:40 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13174798
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: deller@gmx.de, Jens Axboe
Subject: [PATCH 3/5] io_uring/kbuf: add buffer_list->is_mapped member
Date: Tue, 14 Mar 2023 11:16:40 -0600
Message-Id: <20230314171641.10542-4-axboe@kernel.dk>
In-Reply-To: <20230314171641.10542-1-axboe@kernel.dk>
References: <20230314171641.10542-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

Rather than rely on checking buffer_list->buf_pages or ->buf_nr_pages,
add a separate member that tracks if this is a ring mapped provided
buffer list or not.

Signed-off-by: Jens Axboe
---
 io_uring/kbuf.c | 14 ++++++++------
 io_uring/kbuf.h |  3 +++
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 3adc08f90e41..db5f189267b7 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -179,7 +179,7 @@ void __user *io_buffer_select(struct io_kiocb *req, size_t *len,

 	bl = io_buffer_get_list(ctx, req->buf_index);
 	if (likely(bl)) {
-		if (bl->buf_nr_pages)
+		if (bl->is_mapped)
 			ret = io_ring_buffer_select(req, len, bl, issue_flags);
 		else
 			ret = io_provided_buffer_select(req, len, bl);
@@ -214,7 +214,7 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 	if (!nbufs)
 		return 0;

-	if (bl->buf_nr_pages) {
+	if (bl->is_mapped && bl->buf_nr_pages) {
 		int j;

 		i = bl->buf_ring->tail - bl->head;
@@ -225,6 +225,7 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 		bl->buf_nr_pages = 0;
 		/* make sure it's seen as empty */
 		INIT_LIST_HEAD(&bl->buf_list);
+		bl->is_mapped = 0;
 		return i;
 	}

@@ -303,7 +304,7 @@ int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
 	if (bl) {
 		ret = -EINVAL;
 		/* can't use provide/remove buffers command on mapped buffers */
-		if (!bl->buf_nr_pages)
+		if (!bl->is_mapped)
 			ret = __io_remove_buffers(ctx, bl, p->nbufs);
 	}
 	io_ring_submit_unlock(ctx, issue_flags);
@@ -448,7 +449,7 @@ int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
 		}
 	}
 	/* can't add buffers via this command for a mapped buffer ring */
-	if (bl->buf_nr_pages) {
+	if (bl->is_mapped) {
 		ret = -EINVAL;
 		goto err;
 	}
@@ -480,6 +481,7 @@ static int io_pin_pbuf_ring(struct io_uring_buf_reg *reg,
 	bl->buf_pages = pages;
 	bl->buf_nr_pages = nr_pages;
 	bl->buf_ring = br;
+	bl->is_mapped = 1;
 	return 0;
 }

@@ -514,7 +516,7 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 	bl = io_buffer_get_list(ctx, reg.bgid);
 	if (bl) {
 		/* if mapped buffer ring OR classic exists, don't allow */
-		if (bl->buf_nr_pages || !list_empty(&bl->buf_list))
+		if (bl->is_mapped || !list_empty(&bl->buf_list))
 			return -EEXIST;
 	} else {
 		free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
@@ -548,7 +550,7 @@ int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 	bl = io_buffer_get_list(ctx, reg.bgid);
 	if (!bl)
 		return -ENOENT;
-	if (!bl->buf_nr_pages)
+	if (!bl->is_mapped)
 		return -EINVAL;

 	__io_remove_buffers(ctx, bl, -1U);
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index c23e15d7d3ca..61b9c7dade9d 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -23,6 +23,9 @@ struct io_buffer_list {
 	__u16 nr_entries;
 	__u16 head;
 	__u16 mask;
+
+	/* ring mapped provided buffers */
+	__u8 is_mapped;
 };

 struct io_buffer {

From patchwork Tue Mar 14 17:16:41 2023
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13174799
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=W9y1AOW+wzzyKm4hOaf7aPymFqkoUpTPYFIIeeYcziY=; b=RVpN5cpEH+PxOjzyqQ8D13Ph+D6dTyOaVQFleYFPnYaRK2Myh8WdajoUIRqoiYj/eF 7E1DZA1AL0zoecxp0W1qczlgwC27U3vuCOp8GWHCFNa8PFbURM49mYTq3k+XYHMaRBxV rRVSV/7p+iWBi+0WXGE3PBjF/akHGmdFUU0/1OmrgpBUQjmA5LPk/g0JsWN2rCL7eI1f 83ou7vNtZf30mn3Gn4MbVdv4MnheFsnGjqsZZiztQuAJO4JwknRYPo+JJPoRW2SIUHkK xJ6I4rJcyRoNp1u7dVN43hTAxhSwR1okZchBC6OPIHVuNSBPBlg4QlcdnMMRvCm07VEs bM2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1678814228; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=W9y1AOW+wzzyKm4hOaf7aPymFqkoUpTPYFIIeeYcziY=; b=HgPZjAk2ju6xyhkKqqbeMKyLQ6SG1hOp2HGLB7hkvOB2F2+C2BzRSXyPE+6Z6/d9BN TFIHmxpiHSYdhT+X6fR3gWrMMqFwqYzAxbCafUGAK8JbseCSN5GLNiujAfmGxvTxDMks JoIxzNBWJaZTW66iryK1OIO6ccrWiy5KBjLc3X6pUasDoaZ3JnWrbUC1YqnwE9JZsCxu 1YlOHsdqK4GCxOHneSIZdHXfTWxXnfF0xZF+ddhtoxwn9SFN1EODipZxe7/Drqw4ruK7 VYk2Iz/7X58VfRpkqQJJ/eppmJxAJD78gCb9Rq2lQin14SpQzF7bthzAJPC86l21fVXf gIPg== X-Gm-Message-State: AO0yUKVnGXUM9ZwQSHRNCKV/ZwgET99894Q6pmzWa/1dpt3n3WyGA+oG 3Cf+fXx07suw4vKoCLAAaQ1zUxufK/vtPBwEyvny1Q== X-Google-Smtp-Source: AK7set9am81GP+KSG0DF/oeQiPpNldOJdrN+vxl+I1TatGokDo6V0TjE6G8N/SlKOLhSaB7FR1+VvA== X-Received: by 2002:a05:6e02:5cb:b0:31f:9b6e:2f52 with SMTP id l11-20020a056e0205cb00b0031f9b6e2f52mr8376653ils.0.1678814228049; Tue, 14 Mar 2023 10:17:08 -0700 (PDT) Received: from localhost.localdomain ([96.43.243.2]) by smtp.gmail.com with ESMTPSA id o12-20020a056e02068c00b003179b81610csm948950ils.17.2023.03.14.10.17.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Mar 2023 10:17:07 -0700 (PDT) From: Jens Axboe To: io-uring@vger.kernel.org Cc: deller@gmx.de, Jens Axboe Subject: [PATCH 4/5] io_uring/kbuf: rename struct 
io_uring_buf_reg 'pad' to'flags' Date: Tue, 14 Mar 2023 11:16:41 -0600 Message-Id: <20230314171641.10542-5-axboe@kernel.dk> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230314171641.10542-1-axboe@kernel.dk> References: <20230314171641.10542-1-axboe@kernel.dk> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org In preparation for allowing flags to be set for registration, rename the padding and use it for that. Signed-off-by: Jens Axboe --- include/uapi/linux/io_uring.h | 2 +- io_uring/kbuf.c | 8 ++++++-- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h index 709de6d4feb2..c3f3ea997f3a 100644 --- a/include/uapi/linux/io_uring.h +++ b/include/uapi/linux/io_uring.h @@ -640,7 +640,7 @@ struct io_uring_buf_reg { __u64 ring_addr; __u32 ring_entries; __u16 bgid; - __u16 pad; + __u16 flags; __u64 resv[3]; }; diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c index db5f189267b7..4b2f4a0ee962 100644 --- a/io_uring/kbuf.c +++ b/io_uring/kbuf.c @@ -494,7 +494,9 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg) if (copy_from_user(®, arg, sizeof(reg))) return -EFAULT; - if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2]) + if (reg.resv[0] || reg.resv[1] || reg.resv[2]) + return -EINVAL; + if (reg.flags) return -EINVAL; if (!reg.ring_addr) return -EFAULT; @@ -544,7 +546,9 @@ int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg) if (copy_from_user(®, arg, sizeof(reg))) return -EFAULT; - if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2]) + if (reg.resv[0] || reg.resv[1] || reg.resv[2]) + return -EINVAL; + if (reg.flags) return -EINVAL; bl = io_buffer_get_list(ctx, reg.bgid); From patchwork Tue Mar 14 17:16:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Axboe X-Patchwork-Id: 13174800 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: deller@gmx.de, Jens Axboe
Subject: [PATCH 5/5] io_uring: add support for user mapped provided buffer ring
Date: Tue, 14 Mar 2023 11:16:42 -0600
Message-Id: <20230314171641.10542-6-axboe@kernel.dk>
In-Reply-To: <20230314171641.10542-1-axboe@kernel.dk>
References: <20230314171641.10542-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

The ring mapped provided buffer rings rely on the application allocating
the memory for the ring, and then the kernel will map it. This generally
works fine, but runs into issues on some architectures where we need to
be able to ensure that the kernel and application virtual address for
the ring play nicely together. This at least impacts architectures that
set SHM_COLOUR, but potentially also anyone setting SHMLBA.

To use this variant of ring provided buffers, the application need not
allocate any memory for the ring.
Instead the kernel will do so, and the application must subsequently
call mmap(2) on the ring with the offset set to:

	IORING_OFF_PBUF_RING | (bgid << IORING_OFF_PBUF_SHIFT)

to get a virtual address for the buffer ring. Normally the application
would allocate a suitable piece of memory (and correctly aligned) and
simply pass that in via io_uring_buf_reg.ring_addr and the kernel would
map it.

Outside of the setup differences, the kernel allocate + user mapped
provided buffer ring works exactly the same.

Signed-off-by: Jens Axboe
---
 include/uapi/linux/io_uring.h | 17 ++++++
 io_uring/io_uring.c           | 13 ++++-
 io_uring/kbuf.c               | 99 +++++++++++++++++++++++++++--------
 io_uring/kbuf.h               |  4 ++
 4 files changed, 109 insertions(+), 24 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index c3f3ea997f3a..1d59c816a5b8 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -389,6 +389,9 @@ enum {
 #define IORING_OFF_SQ_RING		0ULL
 #define IORING_OFF_CQ_RING		0x8000000ULL
 #define IORING_OFF_SQES			0x10000000ULL
+#define IORING_OFF_PBUF_RING		0x80000000ULL
+#define IORING_OFF_PBUF_SHIFT		16
+#define IORING_OFF_MMAP_MASK		0xf8000000ULL

 /*
  * Filled with the offset for mmap(2)
@@ -635,6 +638,20 @@ struct io_uring_buf_ring {
 	};
 };

+/*
+ * Flags for IORING_REGISTER_PBUF_RING.
+ *
+ * IOU_PBUF_RING_MMAP:	If set, kernel will allocate the memory for the ring.
+ *			The application must not set a ring_addr in struct
+ *			io_uring_buf_reg, instead it must subsequently call
+ *			mmap(2) with the offset set as:
+ *			IORING_OFF_PBUF_RING | (bgid << IORING_OFF_PBUF_SHIFT)
+ *			to get a virtual mapping for the ring.
+ */
+enum {
+	IOU_PBUF_RING_MMAP = 1,
+};
+
 /* argument for IORING_(UN)REGISTER_PBUF_RING */
 struct io_uring_buf_reg {
 	__u64	ring_addr;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3adecebbac71..caebe9c82728 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3283,7 +3283,7 @@ static void *io_uring_validate_mmap_request(struct file *file,
 	struct page *page;
 	void *ptr;

-	switch (offset) {
+	switch (offset & IORING_OFF_MMAP_MASK) {
 	case IORING_OFF_SQ_RING:
 	case IORING_OFF_CQ_RING:
 		ptr = ctx->rings;
@@ -3291,6 +3291,17 @@ static void *io_uring_validate_mmap_request(struct file *file,
 	case IORING_OFF_SQES:
 		ptr = ctx->sq_sqes;
 		break;
+	case IORING_OFF_PBUF_RING: {
+		unsigned int bgid;
+
+		bgid = (offset & ~IORING_OFF_MMAP_MASK) >> IORING_OFF_PBUF_SHIFT;
+		mutex_lock(&ctx->uring_lock);
+		ptr = io_pbuf_get_address(ctx, bgid);
+		mutex_unlock(&ctx->uring_lock);
+		if (!ptr)
+			return ERR_PTR(-EINVAL);
+		break;
+		}
 	default:
 		return ERR_PTR(-EINVAL);
 	}
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 4b2f4a0ee962..cd1d9dddf58e 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -137,7 +137,8 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
 		return NULL;

 	head &= bl->mask;
-	if (head < IO_BUFFER_LIST_BUF_PER_PAGE) {
+	/* mmaped buffers are always contig */
+	if (bl->is_mmap || head < IO_BUFFER_LIST_BUF_PER_PAGE) {
 		buf = &br->bufs[head];
 	} else {
 		int off = head & (IO_BUFFER_LIST_BUF_PER_PAGE - 1);
@@ -214,15 +215,27 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 	if (!nbufs)
 		return 0;

-	if (bl->is_mapped && bl->buf_nr_pages) {
-		int j;
-
+	if (bl->is_mapped) {
 		i = bl->buf_ring->tail - bl->head;
-		for (j = 0; j < bl->buf_nr_pages; j++)
-			unpin_user_page(bl->buf_pages[j]);
-		kvfree(bl->buf_pages);
-		bl->buf_pages = NULL;
-		bl->buf_nr_pages = 0;
+		if (bl->is_mmap) {
+			if (bl->buf_ring) {
+				struct page *page;
+
+				page = virt_to_head_page(bl->buf_ring);
+				if (put_page_testzero(page))
+					free_compound_page(page);
+				bl->buf_ring = NULL;
+			}
+			bl->is_mmap = 0;
+		} else if (bl->buf_nr_pages) {
+			int j;
+
+			for (j = 0; j < bl->buf_nr_pages; j++)
+				unpin_user_page(bl->buf_pages[j]);
+			kvfree(bl->buf_pages);
+			bl->buf_pages = NULL;
+			bl->buf_nr_pages = 0;
+		}
 		/* make sure it's seen as empty */
 		INIT_LIST_HEAD(&bl->buf_list);
 		bl->is_mapped = 0;
@@ -482,6 +495,25 @@ static int io_pin_pbuf_ring(struct io_uring_buf_reg *reg,
 	bl->buf_nr_pages = nr_pages;
 	bl->buf_ring = br;
 	bl->is_mapped = 1;
+	bl->is_mmap = 0;
+	return 0;
+}
+
+static int io_alloc_pbuf_ring(struct io_uring_buf_reg *reg,
+			      struct io_buffer_list *bl)
+{
+	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
+	size_t ring_size;
+	void *ptr;
+
+	ring_size = reg->ring_entries * sizeof(struct io_uring_buf_ring);
+	ptr = (void *) __get_free_pages(gfp, get_order(ring_size));
+	if (!ptr)
+		return -ENOMEM;
+
+	bl->buf_ring = ptr;
+	bl->is_mapped = 1;
+	bl->is_mmap = 1;
 	return 0;
 }

@@ -496,12 +528,18 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)

 	if (reg.resv[0] || reg.resv[1] || reg.resv[2])
 		return -EINVAL;
-	if (reg.flags)
-		return -EINVAL;
-	if (!reg.ring_addr)
-		return -EFAULT;
-	if (reg.ring_addr & ~PAGE_MASK)
+	if (reg.flags & ~IOU_PBUF_RING_MMAP)
 		return -EINVAL;
+	if (!(reg.flags & IOU_PBUF_RING_MMAP)) {
+		if (!reg.ring_addr)
+			return -EFAULT;
+		if (reg.ring_addr & ~PAGE_MASK)
+			return -EINVAL;
+	} else {
+		if (reg.ring_addr)
+			return -EINVAL;
+	}
+
 	if (!is_power_of_2(reg.ring_entries))
 		return -EINVAL;

@@ -526,17 +564,21 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 		return -ENOMEM;
 	}

-	ret = io_pin_pbuf_ring(&reg, bl);
-	if (ret) {
-		kfree(free_bl);
-		return ret;
-	}
+	if (!(reg.flags & IOU_PBUF_RING_MMAP))
+		ret = io_pin_pbuf_ring(&reg, bl);
+	else
+		ret = io_alloc_pbuf_ring(&reg, bl);

-	bl->nr_entries = reg.ring_entries;
-	bl->mask = reg.ring_entries - 1;
+	if (!ret) {
+		bl->nr_entries = reg.ring_entries;
+		bl->mask = reg.ring_entries - 1;

-	io_buffer_add_list(ctx, bl, reg.bgid);
-	return 0;
+		io_buffer_add_list(ctx, bl, reg.bgid);
+		return 0;
+	}
+
+	kfree(free_bl);
+	return ret;
 }

 int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
@@ -564,3 +606,14 @@ int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 	}
 	return 0;
 }
+
+void *io_pbuf_get_address(struct io_ring_ctx *ctx, unsigned long bgid)
+{
+	struct io_buffer_list *bl;
+
+	bl = io_buffer_get_list(ctx, bgid);
+	if (!bl || !bl->is_mmap)
+		return NULL;
+
+	return bl->buf_ring;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 61b9c7dade9d..d14345ef61fc 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -26,6 +26,8 @@ struct io_buffer_list {

 	/* ring mapped provided buffers */
 	__u8 is_mapped;
+	/* ring mapped provided buffers, but mmap'ed by application */
+	__u8 is_mmap;
 };

 struct io_buffer {
@@ -53,6 +55,8 @@ unsigned int __io_put_kbuf(struct io_kiocb *req, unsigned issue_flags);

 void io_kbuf_recycle_legacy(struct io_kiocb *req, unsigned issue_flags);

+void *io_pbuf_get_address(struct io_ring_ctx *ctx, unsigned long bgid);
+
 static inline void io_kbuf_recycle_ring(struct io_kiocb *req)
 {
 	/*