From patchwork Tue Aug 27 15:23:05 2024
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13779674
From: Jens Axboe
To: io-uring@vger.kernel.org
Subject: [PATCH 1/5] io_uring/kbuf: add io_kbuf_commit() helper
Date: Tue, 27 Aug 2024 09:23:05 -0600
Message-ID: <20240827152500.295643-2-axboe@kernel.dk>
In-Reply-To: <20240827152500.295643-1-axboe@kernel.dk>
References: <20240827152500.295643-1-axboe@kernel.dk>

Committing the selected ring buffer is currently done in three different
spots. Combine the logic into a helper and call that instead.

Signed-off-by: Jens Axboe
---
 io_uring/kbuf.c |  7 +++----
 io_uring/kbuf.h | 14 ++++++++++----
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index a4bde998f50d..c69f69807885 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -171,9 +171,8 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
		 * the transfer completes (or if we get -EAGAIN and must poll of
		 * retry).
		 */
-		req->flags &= ~REQ_F_BUFFERS_COMMIT;
+		io_kbuf_commit(req, bl, 1);
		req->buf_list = NULL;
-		bl->head++;
	}
	return u64_to_user_ptr(buf->addr);
 }
@@ -297,8 +296,8 @@ int io_buffers_select(struct io_kiocb *req, struct buf_sel_arg *arg,
		 * committed them, they cannot be put back in the queue.
		 */
		if (ret > 0) {
-			req->flags |= REQ_F_BL_NO_RECYCLE;
-			bl->head += ret;
+			req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE;
+			io_kbuf_commit(req, bl, ret);
		}
	} else {
		ret = io_provided_buffers_select(req, &arg->out_len, bl, arg->iovs);
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index ab30aa13fb5e..43c7b18244b3 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -121,15 +121,21 @@ static inline bool io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
	return false;
 }

+static inline void io_kbuf_commit(struct io_kiocb *req,
+				  struct io_buffer_list *bl, int nr)
+{
+	if (unlikely(!(req->flags & REQ_F_BUFFERS_COMMIT)))
+		return;
+	bl->head += nr;
+	req->flags &= ~REQ_F_BUFFERS_COMMIT;
+}
+
 static inline void __io_put_kbuf_ring(struct io_kiocb *req, int nr)
 {
	struct io_buffer_list *bl = req->buf_list;

	if (bl) {
-		if (req->flags & REQ_F_BUFFERS_COMMIT) {
-			bl->head += nr;
-			req->flags &= ~REQ_F_BUFFERS_COMMIT;
-		}
+		io_kbuf_commit(req, bl, nr);
		req->buf_index = bl->bgid;
	}
	req->flags &= ~REQ_F_BUFFER_RING;

From patchwork Tue Aug 27 15:23:06 2024
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13779673
From: Jens Axboe
To: io-uring@vger.kernel.org
Subject: [PATCH 2/5] io_uring/kbuf: move io_ring_head_to_buf() to kbuf.h
Date: Tue, 27 Aug 2024 09:23:06 -0600
Message-ID: <20240827152500.295643-3-axboe@kernel.dk>
In-Reply-To: <20240827152500.295643-1-axboe@kernel.dk>
References: <20240827152500.295643-1-axboe@kernel.dk>

In preparation for using this helper in kbuf.h as well, move it there
and turn it into a macro.
Signed-off-by: Jens Axboe
---
 io_uring/kbuf.c | 6 ------
 io_uring/kbuf.h | 3 +++
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index c69f69807885..297c1d2c3c27 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -132,12 +132,6 @@ static int io_provided_buffers_select(struct io_kiocb *req, size_t *len,
	return 0;
 }

-static struct io_uring_buf *io_ring_head_to_buf(struct io_uring_buf_ring *br,
-						__u16 head, __u16 mask)
-{
-	return &br->bufs[head & mask];
-}
-
 static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
					  struct io_buffer_list *bl,
					  unsigned int issue_flags)
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 43c7b18244b3..4c34ff3144b9 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -121,6 +121,9 @@ static inline bool io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
	return false;
 }

+/* Mapped buffer ring, return io_uring_buf from head */
+#define io_ring_head_to_buf(br, head, mask)	&(br)->bufs[(head) & (mask)]
+
 static inline void io_kbuf_commit(struct io_kiocb *req,
				  struct io_buffer_list *bl, int nr)
 {

From patchwork Tue Aug 27 15:23:07 2024
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13779675
From: Jens Axboe
To: io-uring@vger.kernel.org
Subject: [PATCH 3/5] Revert "io_uring: Require zeroed sqe->len on provided-buffers send"
Date: Tue, 27 Aug 2024 09:23:07 -0600
Message-ID: <20240827152500.295643-4-axboe@kernel.dk>
In-Reply-To: <20240827152500.295643-1-axboe@kernel.dk>
References: <20240827152500.295643-1-axboe@kernel.dk>

This reverts commit 79996b45f7b28c0e3e08a95bab80119e95317e28.

Revert the change that required a send with provided buffers to set
sqe->len to zero, which made a send always consume the whole buffer.
This is strictly needed for partial consumption, as the send may very
well be a subset of the current buffer. In fact, that's the intended use
case.

For non-incremental provided buffer rings, an application should set
sqe->len carefully to avoid the potential issue described in the
reverted commit. It is still recommended that len be set to 0 in that
case, if the application intends to keep more than one send inflight on
the same socket, though that is somewhat of a nonsensical thing to do
anyway.

Signed-off-by: Jens Axboe
---
 io_uring/net.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index dc83a35b8af4..cc81bcacdc1b 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -434,8 +434,6 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
		sr->buf_group = req->buf_index;
		req->buf_list = NULL;
	}
-	if (req->flags & REQ_F_BUFFER_SELECT && sr->len)
-		return -EINVAL;

 #ifdef CONFIG_COMPAT
	if (req->ctx->compat)
@@ -599,7 +597,7 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
	if (io_do_buffer_select(req)) {
		struct buf_sel_arg arg = {
			.iovs = &kmsg->fast_iov,
-			.max_len = INT_MAX,
+			.max_len = min_not_zero(sr->len, INT_MAX),
			.nr_iovs = 1,
		};

From patchwork Tue Aug 27 15:23:08 2024
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13779676
From: Jens Axboe
To: io-uring@vger.kernel.org
Subject: [PATCH 4/5] io_uring/kbuf: pass in 'len' argument for buffer commit
Date: Tue, 27 Aug 2024 09:23:08 -0600
Message-ID: <20240827152500.295643-5-axboe@kernel.dk>
In-Reply-To: <20240827152500.295643-1-axboe@kernel.dk>
References: <20240827152500.295643-1-axboe@kernel.dk>

In preparation for needing the consumed length, pass in the length being
completed. Unused right now, but will be used when it is possible to
partially consume a buffer.

Signed-off-by: Jens Axboe
---
 io_uring/io_uring.c |  2 +-
 io_uring/kbuf.c     | 10 +++++-----
 io_uring/kbuf.h     | 33 +++++++++++++++++----------------
 io_uring/net.c      |  8 ++++----
 io_uring/rw.c       |  8 ++++----
 5 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 80bb6e2374e9..1aca501efaf6 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -904,7 +904,7 @@ void io_req_defer_failed(struct io_kiocb *req, s32 res)
	lockdep_assert_held(&req->ctx->uring_lock);

	req_set_fail(req);
-	io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
+	io_req_set_res(req, res, io_put_kbuf(req, res, IO_URING_F_UNLOCKED));
	if (def->fail)
		def->fail(req);
	io_req_complete_defer(req);
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 297c1d2c3c27..55d01861d8c5 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -70,7 +70,7 @@ bool io_kbuf_recycle_legacy(struct io_kiocb *req, unsigned issue_flags)
	return true;
 }

-void __io_put_kbuf(struct io_kiocb *req, unsigned issue_flags)
+void __io_put_kbuf(struct io_kiocb *req, int len, unsigned issue_flags)
 {
	/*
	 * We can add this buffer back to two lists:
@@ -88,12 +88,12 @@ void __io_put_kbuf(struct io_kiocb *req, unsigned issue_flags)
		struct io_ring_ctx *ctx = req->ctx;

		spin_lock(&ctx->completion_lock);
-		__io_put_kbuf_list(req, &ctx->io_buffers_comp);
+		__io_put_kbuf_list(req, len, &ctx->io_buffers_comp);
		spin_unlock(&ctx->completion_lock);
	} else {
		lockdep_assert_held(&req->ctx->uring_lock);

-		__io_put_kbuf_list(req, &req->ctx->io_buffers_cache);
+		__io_put_kbuf_list(req, len, &req->ctx->io_buffers_cache);
	}
 }

@@ -165,7 +165,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
		 * the transfer completes (or if we get -EAGAIN and must poll of
		 * retry).
		 */
-		io_kbuf_commit(req, bl, 1);
+		io_kbuf_commit(req, bl, *len, 1);
		req->buf_list = NULL;
	}
	return u64_to_user_ptr(buf->addr);
@@ -291,7 +291,7 @@ int io_buffers_select(struct io_kiocb *req, struct buf_sel_arg *arg,
		 */
		if (ret > 0) {
			req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE;
-			io_kbuf_commit(req, bl, ret);
+			io_kbuf_commit(req, bl, arg->out_len, ret);
		}
	} else {
		ret = io_provided_buffers_select(req, &arg->out_len, bl, arg->iovs);
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 4c34ff3144b9..b41e2a0a0505 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -77,7 +77,7 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg);
 int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg);
 int io_register_pbuf_status(struct io_ring_ctx *ctx, void __user *arg);

-void __io_put_kbuf(struct io_kiocb *req, unsigned issue_flags);
+void __io_put_kbuf(struct io_kiocb *req, int len, unsigned issue_flags);

 bool io_kbuf_recycle_legacy(struct io_kiocb *req, unsigned issue_flags);

@@ -125,7 +125,7 @@ static inline bool io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
 #define io_ring_head_to_buf(br, head, mask)	&(br)->bufs[(head) & (mask)]

 static inline void io_kbuf_commit(struct io_kiocb *req,
-				  struct io_buffer_list *bl, int nr)
+				  struct io_buffer_list *bl, int len, int nr)
 {
	if (unlikely(!(req->flags & REQ_F_BUFFERS_COMMIT)))
		return;
@@ -133,22 +133,22 @@ static inline void io_kbuf_commit(struct io_kiocb *req,
	req->flags &= ~REQ_F_BUFFERS_COMMIT;
 }

-static inline void __io_put_kbuf_ring(struct io_kiocb *req, int nr)
+static inline void __io_put_kbuf_ring(struct io_kiocb *req, int len, int nr)
 {
	struct io_buffer_list *bl = req->buf_list;

	if (bl) {
-		io_kbuf_commit(req, bl, nr);
+		io_kbuf_commit(req, bl, len, nr);
		req->buf_index = bl->bgid;
	}
	req->flags &= ~REQ_F_BUFFER_RING;
 }

-static inline void __io_put_kbuf_list(struct io_kiocb *req,
+static inline void __io_put_kbuf_list(struct io_kiocb *req, int len,
				      struct list_head *list)
 {
	if (req->flags & REQ_F_BUFFER_RING) {
-		__io_put_kbuf_ring(req, 1);
+		__io_put_kbuf_ring(req, len, 1);
	} else {
		req->buf_index = req->kbuf->bgid;
		list_add(&req->kbuf->list, list);
@@ -163,11 +163,12 @@ static inline void io_kbuf_drop(struct io_kiocb *req)
	if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
		return;

-	__io_put_kbuf_list(req, &req->ctx->io_buffers_comp);
+	/* len == 0 is fine here, non-ring will always drop all of it */
+	__io_put_kbuf_list(req, 0, &req->ctx->io_buffers_comp);
 }

-static inline unsigned int __io_put_kbufs(struct io_kiocb *req, int nbufs,
-					  unsigned issue_flags)
+static inline unsigned int __io_put_kbufs(struct io_kiocb *req, int len,
+					  int nbufs, unsigned issue_flags)
 {
	unsigned int ret;

@@ -176,21 +177,21 @@ static inline unsigned int __io_put_kbufs(struct io_kiocb *req, int nbufs,
	ret = IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);

	if (req->flags & REQ_F_BUFFER_RING)
-		__io_put_kbuf_ring(req, nbufs);
+		__io_put_kbuf_ring(req, len, nbufs);
	else
-		__io_put_kbuf(req, issue_flags);
+		__io_put_kbuf(req, len, issue_flags);
	return ret;
 }

-static inline unsigned int io_put_kbuf(struct io_kiocb *req,
+static inline unsigned int io_put_kbuf(struct io_kiocb *req, int len,
				       unsigned issue_flags)
 {
-	return __io_put_kbufs(req, 1, issue_flags);
+	return __io_put_kbufs(req, len, 1, issue_flags);
 }

-static inline unsigned int io_put_kbufs(struct io_kiocb *req, int nbufs,
-					unsigned issue_flags)
+static inline unsigned int io_put_kbufs(struct io_kiocb *req, int len,
+					int nbufs, unsigned issue_flags)
 {
-	return __io_put_kbufs(req, nbufs, issue_flags);
+	return __io_put_kbufs(req, len, nbufs, issue_flags);
 }
 #endif
diff --git a/io_uring/net.c b/io_uring/net.c
index cc81bcacdc1b..f10f5a22d66a 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -497,11 +497,11 @@ static inline bool io_send_finish(struct io_kiocb *req, int *ret,
		unsigned int cflags;

		if (!(sr->flags & IORING_RECVSEND_BUNDLE)) {
-			cflags = io_put_kbuf(req, issue_flags);
+			cflags = io_put_kbuf(req, *ret, issue_flags);
			goto finish;
		}

-		cflags = io_put_kbufs(req, io_bundle_nbufs(kmsg, *ret), issue_flags);
+		cflags = io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, *ret), issue_flags);

		if (bundle_finished || req->flags & REQ_F_BL_EMPTY)
			goto finish;
@@ -842,13 +842,13 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
		cflags |= IORING_CQE_F_SOCK_NONEMPTY;

	if (sr->flags & IORING_RECVSEND_BUNDLE) {
-		cflags |= io_put_kbufs(req, io_bundle_nbufs(kmsg, *ret),
+		cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, *ret),
				       issue_flags);
		/* bundle with no more immediate buffers, we're done */
		if (req->flags & REQ_F_BL_EMPTY)
			goto finish;
	} else {
-		cflags |= io_put_kbuf(req, issue_flags);
+		cflags |= io_put_kbuf(req, *ret, issue_flags);
	}

	/*
diff --git a/io_uring/rw.c b/io_uring/rw.c
index c004d21e2f12..f5e0694538b9 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -511,7 +511,7 @@ void io_req_rw_complete(struct io_kiocb *req, struct io_tw_state *ts)
	io_req_io_end(req);

	if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING))
-		req->cqe.flags |= io_put_kbuf(req, 0);
+		req->cqe.flags |= io_put_kbuf(req, req->cqe.res, 0);

	io_req_rw_cleanup(req, 0);
	io_req_task_complete(req, ts);
@@ -593,7 +593,7 @@ static int kiocb_done(struct io_kiocb *req, ssize_t ret,
		 */
		io_req_io_end(req);
		io_req_set_res(req, final_ret,
			       io_put_kbuf(req, issue_flags));
+			       io_put_kbuf(req, ret, issue_flags));
		io_req_rw_cleanup(req, issue_flags);
		return IOU_OK;
	}
@@ -975,7 +975,7 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
		 * Put our buffer and post a CQE. If we fail to post a CQE, then
		 * jump to the termination path. This request is then done.
		 */
-		cflags = io_put_kbuf(req, issue_flags);
+		cflags = io_put_kbuf(req, ret, issue_flags);
		rw->len = 0;	/* similarly to above, reset len to 0 */

		if (io_req_post_cqe(req, ret, cflags | IORING_CQE_F_MORE)) {
@@ -1167,7 +1167,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
		if (!smp_load_acquire(&req->iopoll_completed))
			break;
		nr_events++;
-		req->cqe.flags = io_put_kbuf(req, 0);
+		req->cqe.flags = io_put_kbuf(req, req->cqe.res, 0);
		if (req->opcode != IORING_OP_URING_CMD)
			io_req_rw_cleanup(req, 0);
	}

From patchwork Tue Aug 27 15:23:09 2024
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13779677
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 5/5] io_uring/kbuf: add support for incremental buffer consumption
Date: Tue, 27 Aug 2024 09:23:09 -0600
Message-ID: <20240827152500.295643-6-axboe@kernel.dk>
In-Reply-To: <20240827152500.295643-1-axboe@kernel.dk>
References: <20240827152500.295643-1-axboe@kernel.dk>

By default, any recv/read operation that uses provided buffers will
consume at least 1 buffer fully (and maybe more, in case of bundles).
This adds support for incremental consumption, meaning that an
application may add large buffers, and each read/recv will just consume
the part of the buffer that it needs.

For example, let's say an application registers 1MB buffers in a
provided buffer ring, for streaming receives. If it gets a short recv,
then the full 1MB buffer will be consumed and passed back to the
application. With incremental consumption, only the part that was
actually used is consumed, and the buffer remains the current one. This
means that both the application and the kernel need to keep track of
what the current receive point is.
Each recv will still pass back a buffer ID and the size consumed; the
only difference is that, before, the next receive would always be from
the next buffer in the ring. Now the same buffer ID may return multiple
receives, each at an offset into that buffer from where the previous
receive left off. Example:

Application registers a provided buffer ring, and adds two 32K buffers
to the ring.

Buffer1 address: 0x1000000 (buffer ID 0)
Buffer2 address: 0x2000000 (buffer ID 1)

A recv completion is received with the following values:

cqe->res	0x1000	(4k bytes received)
cqe->flags	0x11	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 0)

and the application now knows that 4096b of data is available at
0x1000000, the start of that buffer, and that more data from this buffer
will be coming. Now the next receive comes in:

cqe->res	0x2000	(8k bytes received)
cqe->flags	0x11	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 0)

which tells the application that 8k is available where the last
completion left off, at 0x1001000. Next completion is:

cqe->res	0x5000	(20k bytes received)
cqe->flags	0x1	(CQE_F_BUFFER set, buffer ID 0)

and the application now knows that 20k of data is available at
0x1003000, which is where the previous receive ended. CQE_F_BUF_MORE
isn't set, as no more data is available in this buffer ID. The next
completion is then:

cqe->res	0x1000	(4k bytes received)
cqe->flags	0x10011	(CQE_F_BUFFER|CQE_F_BUF_MORE set, buffer ID 1)

which tells the application that buffer ID 1 is now the current one,
hence there's 4k of valid data at 0x2000000. 0x2001000 will be the next
receive point for this buffer ID.

When a buffer will be reused by future CQE completions,
IORING_CQE_F_BUF_MORE will be set in cqe->flags. This tells the
application that the kernel isn't done with the buffer yet, and that it
should expect more completions for this buffer ID.
It will only be set by provided buffer rings setup with
IOU_PBUF_RING_INC, as that's the only type of buffer that will see
multiple consecutive completions for the same buffer ID. For any other
provided buffer type, any completion that passes back a buffer to the
application is final.

Once a buffer has been fully consumed, the buffer ring head is
incremented and the next receive will indicate the next buffer ID in the
CQE cflags.

On the send side, the application can manage how much data is sent from
an existing buffer by setting sqe->len to the desired send length.

An application can request incremental consumption by setting
IOU_PBUF_RING_INC in the provided buffer ring registration. Outside of
that, any provided buffer ring setup and buffer additions are done like
before, no changes there. The only change is in how an application may
see multiple completions for the same buffer ID, hence needing to know
where the next receive will happen.

Note that like existing provided buffer rings, this should not be used
with IOSQE_ASYNC, as both really require the ring to remain locked over
the duration of the buffer selection and the operation completion. It
will otherwise consume a buffer regardless of the size of the IO done.

Signed-off-by: Jens Axboe
---
 include/uapi/linux/io_uring.h | 18 +++++++++++++++
 io_uring/kbuf.c               | 42 +++++++++++++++++++++++++----------
 io_uring/kbuf.h               | 42 ++++++++++++++++++++++++++++-------
 3 files changed, 82 insertions(+), 20 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 042eab793e26..a275f91d2ac0 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -440,11 +440,21 @@ struct io_uring_cqe {
  * IORING_CQE_F_SOCK_NONEMPTY	If set, more data to read after socket recv
  * IORING_CQE_F_NOTIF	Set for notification CQEs. Can be used to distinct
  *			them from sends.
+ * IORING_CQE_F_BUF_MORE If set, the buffer ID set in the completion will get
+ *			more completions. In other words, the buffer is being
+ *			partially consumed, and will be used by the kernel for
+ *			more completions. This is only set for buffers used via
+ *			the incremental buffer consumption, as provided by
+ *			a ring buffer setup with IOU_PBUF_RING_INC. For any
+ *			other provided buffer type, all completions with a
+ *			buffer passed back is automatically returned to the
+ *			application.
  */
 #define IORING_CQE_F_BUFFER		(1U << 0)
 #define IORING_CQE_F_MORE		(1U << 1)
 #define IORING_CQE_F_SOCK_NONEMPTY	(1U << 2)
 #define IORING_CQE_F_NOTIF		(1U << 3)
+#define IORING_CQE_F_BUF_MORE		(1U << 4)

 #define IORING_CQE_BUFFER_SHIFT		16

@@ -716,9 +726,17 @@ struct io_uring_buf_ring {
  *	mmap(2) with the offset set as:
  *	IORING_OFF_PBUF_RING | (bgid << IORING_OFF_PBUF_SHIFT)
  *	to get a virtual mapping for the ring.
+ * IOU_PBUF_RING_INC:	If set, buffers consumed from this buffer ring can be
+ *			consumed incrementally. Normally one (or more) buffers
+ *			are fully consumed. With incremental consumptions, it's
+ *			feasible to register big ranges of buffers, and each
+ *			use of it will consume only as much as it needs. This
+ *			requires that both the kernel and application keep
+ *			track of where the current read/recv index is at.
  */
 enum io_uring_register_pbuf_ring_flags {
 	IOU_PBUF_RING_MMAP = 1,
+	IOU_PBUF_RING_INC = 2,
 };

 /* argument for IORING_(UN)REGISTER_PBUF_RING */
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 55d01861d8c5..1f503bcc9c9f 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -212,14 +212,25 @@ static int io_ring_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
 	buf = io_ring_head_to_buf(br, head, bl->mask);
 	if (arg->max_len) {
 		u32 len = READ_ONCE(buf->len);
-		size_t needed;

 		if (unlikely(!len))
 			return -ENOBUFS;
-		needed = (arg->max_len + len - 1) / len;
-		needed = min_not_zero(needed, (size_t) PEEK_MAX_IMPORT);
-		if (nr_avail > needed)
-			nr_avail = needed;
+		/*
+		 * Limit incremental buffers to 1 segment.
No point trying
+		 * to peek ahead and map more than we need, when the buffers
+		 * themselves should be large when setup with
+		 * IOU_PBUF_RING_INC.
+		 */
+		if (bl->flags & IOBL_INC) {
+			nr_avail = 1;
+		} else {
+			size_t needed;
+
+			needed = (arg->max_len + len - 1) / len;
+			needed = min_not_zero(needed, (size_t) PEEK_MAX_IMPORT);
+			if (nr_avail > needed)
+				nr_avail = needed;
+		}
 	}

 	/*
@@ -244,16 +255,21 @@ static int io_ring_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
 	req->buf_index = buf->bid;

 	do {
-		/* truncate end piece, if needed */
-		if (buf->len > arg->max_len)
-			buf->len = arg->max_len;
+		u32 len = buf->len;
+
+		/* truncate end piece, if needed, for non partial buffers */
+		if (len > arg->max_len) {
+			len = arg->max_len;
+			if (!(bl->flags & IOBL_INC))
+				buf->len = len;
+		}

 		iov->iov_base = u64_to_user_ptr(buf->addr);
-		iov->iov_len = buf->len;
+		iov->iov_len = len;
 		iov++;
-		arg->out_len += buf->len;
-		arg->max_len -= buf->len;
+		arg->out_len += len;
+		arg->max_len -= len;
 		if (!arg->max_len)
 			break;

@@ -675,7 +691,7 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 	if (reg.resv[0] || reg.resv[1] || reg.resv[2])
 		return -EINVAL;
-	if (reg.flags & ~IOU_PBUF_RING_MMAP)
+	if (reg.flags & ~(IOU_PBUF_RING_MMAP | IOU_PBUF_RING_INC))
 		return -EINVAL;
 	if (!(reg.flags & IOU_PBUF_RING_MMAP)) {
 		if (!reg.ring_addr)
@@ -713,6 +729,8 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
 	if (!ret) {
 		bl->nr_entries = reg.ring_entries;
 		bl->mask = reg.ring_entries - 1;
+		if (reg.flags & IOU_PBUF_RING_INC)
+			bl->flags |= IOBL_INC;

 		io_buffer_add_list(ctx, bl, reg.bgid);
 		return 0;
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index b41e2a0a0505..36aadfe5ac00 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -9,6 +9,9 @@ enum {
 	IOBL_BUF_RING	= 1,
 	/* ring mapped provided buffers, but mmap'ed by application */
 	IOBL_MMAP	= 2,
+	/* buffers are consumed incrementally rather than always fully */
+	IOBL_INC	= 4,
+
 };

 struct io_buffer_list {
@@ -124,24 +127,45 @@ static inline bool io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
 /* Mapped buffer ring, return io_uring_buf from head */
 #define io_ring_head_to_buf(br, head, mask)	&(br)->bufs[(head) & (mask)]

-static inline void io_kbuf_commit(struct io_kiocb *req,
+static inline bool io_kbuf_commit(struct io_kiocb *req,
 				  struct io_buffer_list *bl, int len, int nr)
 {
 	if (unlikely(!(req->flags & REQ_F_BUFFERS_COMMIT)))
-		return;
-	bl->head += nr;
+		return true;
+
 	req->flags &= ~REQ_F_BUFFERS_COMMIT;
+
+	if (unlikely(len < 0))
+		return true;
+
+	if (bl->flags & IOBL_INC) {
+		struct io_uring_buf *buf;
+
+		buf = io_ring_head_to_buf(bl->buf_ring, bl->head, bl->mask);
+		if (WARN_ON_ONCE(len > buf->len))
+			len = buf->len;
+		buf->len -= len;
+		if (buf->len) {
+			buf->addr += len;
+			return false;
+		}
+	}
+
+	bl->head += nr;
+	return true;
 }

-static inline void __io_put_kbuf_ring(struct io_kiocb *req, int len, int nr)
+static inline bool __io_put_kbuf_ring(struct io_kiocb *req, int len, int nr)
 {
 	struct io_buffer_list *bl = req->buf_list;
+	bool ret = true;

 	if (bl) {
-		io_kbuf_commit(req, bl, len, nr);
+		ret = io_kbuf_commit(req, bl, len, nr);
 		req->buf_index = bl->bgid;
 	}
 	req->flags &= ~REQ_F_BUFFER_RING;
+	return ret;
 }

 static inline void __io_put_kbuf_list(struct io_kiocb *req, int len,
@@ -176,10 +200,12 @@ static inline unsigned int __io_put_kbufs(struct io_kiocb *req, int len,
 		return 0;

 	ret = IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
-	if (req->flags & REQ_F_BUFFER_RING)
-		__io_put_kbuf_ring(req, len, nbufs);
-	else
+	if (req->flags & REQ_F_BUFFER_RING) {
+		if (!__io_put_kbuf_ring(req, len, nbufs))
+			ret |= IORING_CQE_F_BUF_MORE;
+	} else {
 		__io_put_kbuf(req, len, issue_flags);
+	}
 	return ret;
 }
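The userspace side of the bookkeeping described in the commit message (tracking where the next receive lands for each buffer ID) can be sketched in plain C. This is a hypothetical illustration, not part of the patch or of liburing: `struct buf_state` and `consume_cqe()` are made-up names, and only the three flag/shift constants mirror the uapi values used by this series.

```c
#include <stdint.h>

/* Mirror the uapi constants this series relies on */
#define IORING_CQE_F_BUFFER	(1U << 0)
#define IORING_CQE_F_BUF_MORE	(1U << 4)
#define IORING_CQE_BUFFER_SHIFT	16

/* Hypothetical per-buffer-ID state kept by the application */
struct buf_state {
	uint64_t base;		/* address the buffer was added with */
	uint64_t next_off;	/* where the next receive will land */
};

/*
 * Given a completion (res = bytes received, flags = cqe->flags),
 * return the address of the data just received and advance the
 * bookkeeping. If CQE_F_BUF_MORE is clear, the kernel is done with
 * this buffer ID, so the offset resets for whenever the application
 * recycles that ID back into the ring.
 */
static uint64_t consume_cqe(struct buf_state *bufs, int32_t res,
			    uint32_t flags)
{
	unsigned int bid = flags >> IORING_CQE_BUFFER_SHIFT;
	struct buf_state *b = &bufs[bid];
	uint64_t data = b->base + b->next_off;

	if (flags & IORING_CQE_F_BUF_MORE)
		b->next_off += (uint32_t)res;	/* same buffer continues */
	else
		b->next_off = 0;		/* buffer fully consumed */
	return data;
}
```

Walking the two-buffer example from the commit message through this helper yields the same addresses the text derives: 0x1000000, then 0x1001000, then 0x1003000, and finally 0x2000000 once buffer ID 1 becomes current.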