From patchwork Thu Jul 21 05:16:25 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924720
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 1/8] ublk: add a MAINTAINERS entry
Date: Thu, 21 Jul 2022 07:16:25 +0200
Message-Id: <20220721051632.1676890-2-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

Make get_maintainers.pl work for ublk.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index fe5daf1415013..8a4bbca9f28f4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -20415,6 +20415,13 @@ F: Documentation/filesystems/ubifs-authentication.rst
 F:	Documentation/filesystems/ubifs.rst
 F:	fs/ubifs/
 
+UBLK USERSPACE BLOCK DRIVER
+M:	Ming Lei
+L:	linux-block@vger.kernel.org
+S:	Maintained
+F:	drivers/block/ublk_drv.c
+F:	include/uapi/linux/ublk_cmd.h
+
 UCLINUX (M68KNOMMU AND COLDFIRE)
 M:	Greg Ungerer
 L:	linux-m68k@lists.linux-m68k.org

From patchwork Thu Jul 21 05:16:26 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924721
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 2/8] ublk: remove UBLK_IO_F_PREFLUSH
Date: Thu, 21 Jul 2022 07:16:26 +0200
Message-Id: <20220721051632.1676890-3-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

REQ_PREFLUSH is turned into REQ_OP_FLUSH by the flush state machine and
is thus never seen by a blk-mq based driver.
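[Editorial aside: the behaviour described above can be illustrated with a minimal userspace model. This is not kernel code; the request struct, flag values, and `flush_machine` helper are all simplified stand-ins for the real flush state machinery, which splits a REQ_PREFLUSH write into a separate flush request before any blk-mq driver sees it.]

```c
#include <assert.h>

/* Simplified model: the flush state machine strips REQ_PREFLUSH from a
 * write and emits a separate flush request instead, so a blk-mq driver
 * (like ublk) never observes REQ_PREFLUSH on an incoming request.
 * All constants and types here are hypothetical, not the kernel's. */
enum { OP_WRITE, OP_FLUSH };

#define REQ_PREFLUSH (1u << 0)
#define REQ_FUA      (1u << 1)

struct req { int op; unsigned cmd_flags; };

/* Returns the number of requests handed to the driver. */
static int flush_machine(const struct req *in, struct req out[2])
{
	int n = 0;

	if (in->cmd_flags & REQ_PREFLUSH)
		out[n++] = (struct req){ .op = OP_FLUSH, .cmd_flags = 0 };
	out[n] = *in;
	out[n].cmd_flags &= ~REQ_PREFLUSH;	/* driver never sees it */
	n++;
	return n;
}
```

So the UBLK_IO_F_PREFLUSH translation removed by this patch was dead code: the flag check could never fire.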
Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 drivers/block/ublk_drv.c      | 3 ---
 include/uapi/linux/ublk_cmd.h | 1 -
 2 files changed, 4 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index b90481b295a74..07913b5bccd90 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -392,9 +392,6 @@ static inline unsigned int ublk_req_build_flags(struct request *req)
 	if (req->cmd_flags & REQ_FUA)
 		flags |= UBLK_IO_F_FUA;
 
-	if (req->cmd_flags & REQ_PREFLUSH)
-		flags |= UBLK_IO_F_PREFLUSH;
-
 	if (req->cmd_flags & REQ_NOUNMAP)
 		flags |= UBLK_IO_F_NOUNMAP;
 
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index d6879eea2fde0..917580b341984 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -107,7 +107,6 @@ struct ublksrv_ctrl_dev_info {
 #define UBLK_IO_F_FAILFAST_DRIVER	(1U << 10)
 #define UBLK_IO_F_META			(1U << 11)
 #define UBLK_IO_F_FUA			(1U << 13)
-#define UBLK_IO_F_PREFLUSH		(1U << 14)
 #define UBLK_IO_F_NOUNMAP		(1U << 15)
 #define UBLK_IO_F_SWAP			(1U << 16)

From patchwork Thu Jul 21 05:16:27 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924722
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 3/8] ublk: remove the empty open and release block device operations
Date: Thu, 21 Jul 2022 07:16:27 +0200
Message-Id: <20220721051632.1676890-4-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

No need to define empty versions, they can just be left out.
Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 drivers/block/ublk_drv.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 07913b5bccd90..deabcb23ae2af 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -208,19 +208,8 @@ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
 			PAGE_SIZE);
 }
 
-static int ublk_open(struct block_device *bdev, fmode_t mode)
-{
-	return 0;
-}
-
-static void ublk_release(struct gendisk *disk, fmode_t mode)
-{
-}
-
 static const struct block_device_operations ub_fops = {
 	.owner =	THIS_MODULE,
-	.open =		ublk_open,
-	.release =	ublk_release,
 };
 
 #define UBLK_MAX_PIN_PAGES	32

From patchwork Thu Jul 21 05:16:28 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924723
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 4/8] ublk: simplify ublk_ch_open and ublk_ch_release
Date: Thu, 21 Jul 2022 07:16:28 +0200
Message-Id: <20220721051632.1676890-5-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

fops->open and fops->release are always paired.  Use simple atomic bit
ops to indicate if the device is opened instead of a count that can only
be 0 or 1 and a useless cmpxchg loop in ublk_ch_release.

Also don't bother clearing file->private_data, as the file is about to
be freed anyway.
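[Editorial aside: the open/release pattern this patch switches to can be modelled in userspace with C11 atomics. `test_and_set_bit()`/`clear_bit()` are kernel primitives; `atomic_fetch_or`/`atomic_fetch_and` below are their closest standard-C analogues, and `ch_open`/`ch_release` are hypothetical stand-ins for the real handlers.]

```c
#include <assert.h>
#include <stdatomic.h>

/* A single "is open" bit only needs one atomic read-modify-write.
 * test_and_set_bit() returns the bit's previous value, so a second
 * opener sees the bit already set and bails out; no cmpxchg loop is
 * needed on either the open or the release side. */
#define UB_STATE_OPEN (1ul << 0)

static atomic_ulong state;	/* zero-initialised: device starts closed */

static int ch_open(void)
{
	/* atomic_fetch_or returns the old value; a set bit means busy. */
	if (atomic_fetch_or(&state, UB_STATE_OPEN) & UB_STATE_OPEN)
		return -1;	/* -EBUSY in the kernel */
	return 0;
}

static void ch_release(void)
{
	atomic_fetch_and(&state, ~UB_STATE_OPEN);	/* clear_bit() */
}
```

Because open and release are strictly paired by the VFS, release never races with a concurrent holder, so a plain clear is sufficient.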
Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 drivers/block/ublk_drv.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index deabcb23ae2af..1f7bbbc3276a2 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -125,7 +125,8 @@ struct ublk_device {
 	struct cdev		cdev;
 	struct device		cdev_dev;
 
-	atomic_t		ch_open_cnt;
+#define UB_STATE_OPEN	(1 << 0)
+	unsigned long		state;
 	int			ub_number;
 
 	struct mutex		mutex;
@@ -647,21 +648,17 @@ static int ublk_ch_open(struct inode *inode, struct file *filp)
 	struct ublk_device *ub = container_of(inode->i_cdev,
 			struct ublk_device, cdev);
 
-	if (atomic_cmpxchg(&ub->ch_open_cnt, 0, 1) == 0) {
-		filp->private_data = ub;
-		return 0;
-	}
-	return -EBUSY;
+	if (test_and_set_bit(UB_STATE_OPEN, &ub->state))
+		return -EBUSY;
+	filp->private_data = ub;
+	return 0;
 }
 
 static int ublk_ch_release(struct inode *inode, struct file *filp)
 {
 	struct ublk_device *ub = filp->private_data;
 
-	while (atomic_cmpxchg(&ub->ch_open_cnt, 1, 0) != 1)
-		cpu_relax();
-
-	filp->private_data = NULL;
+	clear_bit(UB_STATE_OPEN, &ub->state);
 	return 0;
 }

From patchwork Thu Jul 21 05:16:29 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924725
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 5/8] ublk: cleanup ublk_ctrl_uring_cmd
Date: Thu, 21 Jul 2022 07:16:29 +0200
Message-Id: <20220721051632.1676890-6-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

Move all per-command work into the per-command ublk_ctrl_* helpers
instead of being split over those helpers, ublk_ctrl_cmd_validate, and
the main ublk_ctrl_uring_cmd handler.  To facilitate that, the old
ublk_ctrl_stop_dev function that just contained two function calls is
folded into both callers.
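[Editorial aside: the refactoring pattern described above can be sketched in a few lines of plain C. The command set, header struct, and helper names below are hypothetical, not ublk's; the point is the shape: each per-command helper validates its own header, so the dispatcher only dispatches.]

```c
#include <assert.h>

/* Hypothetical control-command header, modelled on the idea of a length
 * plus a user buffer address that some commands require. */
struct hdr { unsigned len; unsigned long addr; };

static int cmd_get_info(const struct hdr *h)
{
	/* Validation lives inside the helper, not in a central switch. */
	if (h->len < 8 || !h->addr)
		return -22;	/* -EINVAL */
	return 0;		/* ... followed by the actual work */
}

static int cmd_stop(const struct hdr *h)
{
	(void)h;		/* this command has no header constraints */
	return 0;
}

static int uring_cmd(unsigned op, const struct hdr *h)
{
	switch (op) {		/* the dispatcher only dispatches */
	case 0: return cmd_get_info(h);
	case 1: return cmd_stop(h);
	default: return -22;
	}
}
```

Keeping validation next to the work it guards means adding a command touches one function, not two parallel switch statements that can drift apart.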
Signed-off-by: Christoph Hellwig
Reviewed-by: ZiyangZhang
Reviewed-by: Ming Lei
---
 drivers/block/ublk_drv.c | 234 +++++++++++++++++++--------------------
 1 file changed, 116 insertions(+), 118 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 1f7bbbc3276a2..af70c18796e70 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -813,13 +813,6 @@ static void ublk_stop_dev(struct ublk_device *ub)
 	cancel_delayed_work_sync(&ub->monitor_work);
 }
 
-static int ublk_ctrl_stop_dev(struct ublk_device *ub)
-{
-	ublk_stop_dev(ub);
-	cancel_work_sync(&ub->stop_work);
-	return 0;
-}
-
 static inline bool ublk_queue_ready(struct ublk_queue *ubq)
 {
 	return ubq->nr_io_ready == ubq->q_depth;
@@ -1205,8 +1198,8 @@ static int ublk_add_dev(struct ublk_device *ub)
 
 static void ublk_remove(struct ublk_device *ub)
 {
-	ublk_ctrl_stop_dev(ub);
-
+	ublk_stop_dev(ub);
+	cancel_work_sync(&ub->stop_work);
 	cdev_device_del(&ub->cdev, &ub->cdev_dev);
 	put_device(&ub->cdev_dev);
 }
@@ -1227,36 +1220,45 @@ static struct ublk_device *ublk_get_device_from_id(int idx)
 	return ub;
 }
 
-static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
+static int ublk_ctrl_start_dev(struct io_uring_cmd *cmd)
 {
 	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
-	int ret = -EINVAL;
 	int ublksrv_pid = (int)header->data[0];
 	unsigned long dev_blocks = header->data[1];
+	struct ublk_device *ub;
+	int ret = -EINVAL;
 
 	if (ublksrv_pid <= 0)
-		return ret;
+		return -EINVAL;
+
+	ub = ublk_get_device_from_id(header->dev_id);
+	if (!ub)
+		return -EINVAL;
 
 	wait_for_completion_interruptible(&ub->completion);
 
 	schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD);
 
 	mutex_lock(&ub->mutex);
-	if (!disk_live(ub->ub_disk)) {
-		/* We may get disk size updated */
-		if (dev_blocks) {
-			ub->dev_info.dev_blocks = dev_blocks;
-			ublk_update_capacity(ub);
-		}
-		ub->dev_info.ublksrv_pid = ublksrv_pid;
-		ret = add_disk(ub->ub_disk);
-		if (!ret)
-			ub->dev_info.state = UBLK_S_DEV_LIVE;
-	} else {
+	if (disk_live(ub->ub_disk)) {
 		ret = -EEXIST;
+		goto out_unlock;
 	}
-	mutex_unlock(&ub->mutex);
 
+	/* We may get disk size updated */
+	if (dev_blocks) {
+		ub->dev_info.dev_blocks = dev_blocks;
+		ublk_update_capacity(ub);
+	}
+	ub->dev_info.ublksrv_pid = ublksrv_pid;
+	ret = add_disk(ub->ub_disk);
+	if (ret)
+		goto out_unlock;
+
+	ub->dev_info.state = UBLK_S_DEV_LIVE;
+out_unlock:
+	mutex_unlock(&ub->mutex);
+	ublk_put_device(ub);
 	return ret;
 }
@@ -1281,6 +1283,13 @@ static int ublk_ctrl_get_queue_affinity(struct io_uring_cmd *cmd)
 	unsigned long queue;
 	unsigned int retlen;
 	int ret = -EINVAL;
+
+	if (header->len * BITS_PER_BYTE < nr_cpu_ids)
+		return -EINVAL;
+	if (header->len & (sizeof(unsigned long)-1))
+		return -EINVAL;
+	if (!header->addr)
+		return -EINVAL;
 
 	ub = ublk_get_device_from_id(header->dev_id);
 	if (!ub)
@@ -1311,38 +1320,64 @@ static int ublk_ctrl_get_queue_affinity(struct io_uring_cmd *cmd)
 	return ret;
 }
 
-static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_dev_info *info,
-		void __user *argp, int idx)
+static inline void ublk_dump_dev_info(struct ublksrv_ctrl_dev_info *info)
 {
+	pr_devel("%s: dev id %d flags %llx\n", __func__,
+			info->dev_id, info->flags[0]);
+	pr_devel("\t nr_hw_queues %d queue_depth %d block size %d dev_capacity %lld\n",
+			info->nr_hw_queues, info->queue_depth,
+			info->block_size, info->dev_blocks);
+}
+
+static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
+{
+	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+	void __user *argp = (void __user *)(unsigned long)header->addr;
+	struct ublksrv_ctrl_dev_info info;
 	struct ublk_device *ub;
-	int ret;
+	int ret = -EINVAL;
+
+	if (header->len < sizeof(info) || !header->addr)
+		return -EINVAL;
+	if (header->queue_id != (u16)-1) {
+		pr_warn("%s: queue_id is wrong %x\n",
+				__func__, header->queue_id);
+		return -EINVAL;
+	}
+	if (copy_from_user(&info, argp, sizeof(info)))
+		return -EFAULT;
+	ublk_dump_dev_info(&info);
+	if (header->dev_id != info.dev_id) {
+		pr_warn("%s: dev id not match %u %u\n",
+				__func__, header->dev_id, info.dev_id);
+		return -EINVAL;
+	}
 
 	ret = mutex_lock_killable(&ublk_ctl_mutex);
 	if (ret)
 		return ret;
 
-	ub = __ublk_create_dev(idx);
-	if (!IS_ERR_OR_NULL(ub)) {
-		memcpy(&ub->dev_info, info, sizeof(*info));
+	ub = __ublk_create_dev(header->dev_id);
+	if (IS_ERR(ub)) {
+		ret = PTR_ERR(ub);
+		goto out_unlock;
+	}
 
-		/* update device id */
-		ub->dev_info.dev_id = ub->ub_number;
+	memcpy(&ub->dev_info, &info, sizeof(info));
 
-		ret = ublk_add_dev(ub);
-		if (!ret) {
-			if (copy_to_user(argp, &ub->dev_info, sizeof(*info))) {
-				ublk_remove(ub);
-				ret = -EFAULT;
-			}
-		}
-	} else {
-		if (IS_ERR(ub))
-			ret = PTR_ERR(ub);
-		else
-			ret = -ENOMEM;
+	/* update device id */
+	ub->dev_info.dev_id = ub->ub_number;
+
+	ret = ublk_add_dev(ub);
+	if (ret)
+		goto out_unlock;
+
+	if (copy_to_user(argp, &ub->dev_info, sizeof(info))) {
+		ublk_remove(ub);
+		ret = -EFAULT;
 	}
+out_unlock:
 	mutex_unlock(&ublk_ctl_mutex);
-
 	return ret;
 }
@@ -1386,16 +1421,6 @@ static int ublk_ctrl_del_dev(int idx)
 	return ret;
 }
 
-
-static inline void ublk_dump_dev_info(struct ublksrv_ctrl_dev_info *info)
-{
-	pr_devel("%s: dev id %d flags %llx\n", __func__,
-			info->dev_id, info->flags[0]);
-	pr_devel("\t nr_hw_queues %d queue_depth %d block size %d dev_capacity %lld\n",
-			info->nr_hw_queues, info->queue_depth,
-			info->block_size, info->dev_blocks);
-}
-
 static inline void ublk_ctrl_cmd_dump(struct io_uring_cmd *cmd)
 {
 	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
@@ -1405,59 +1430,47 @@ static inline void ublk_ctrl_cmd_dump(struct io_uring_cmd *cmd)
 			header->data[0], header->addr, header->len);
 }
 
-static int ublk_ctrl_cmd_validate(struct io_uring_cmd *cmd,
-		struct ublksrv_ctrl_dev_info *info)
+static int ublk_ctrl_stop_dev(struct io_uring_cmd *cmd)
 {
 	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
-	u32 cmd_op = cmd->cmd_op;
-	void __user *argp = (void __user *)(unsigned long)header->addr;
+	struct ublk_device *ub;
 
-	if (!capable(CAP_SYS_ADMIN))
-		return -EPERM;
+	ub = ublk_get_device_from_id(header->dev_id);
+	if (!ub)
+		return -EINVAL;
 
-	switch (cmd_op) {
-	case UBLK_CMD_GET_DEV_INFO:
-		if (header->len < sizeof(*info) || !header->addr)
-			return -EINVAL;
-		break;
-	case UBLK_CMD_ADD_DEV:
-		if (header->len < sizeof(*info) || !header->addr)
-			return -EINVAL;
-		if (copy_from_user(info, argp, sizeof(*info)) != 0)
-			return -EFAULT;
-		ublk_dump_dev_info(info);
-		if (header->dev_id != info->dev_id) {
-			printk(KERN_WARNING "%s: cmd %x, dev id not match %u %u\n",
-					__func__, cmd_op, header->dev_id,
-					info->dev_id);
-			return -EINVAL;
-		}
-		if (header->queue_id != (u16)-1) {
-			printk(KERN_WARNING "%s: cmd %x queue_id is wrong %x\n",
-					__func__, cmd_op, header->queue_id);
-			return -EINVAL;
-		}
-		break;
-	case UBLK_CMD_GET_QUEUE_AFFINITY:
-		if ((header->len * BITS_PER_BYTE) < nr_cpu_ids)
-			return -EINVAL;
-		if (header->len & (sizeof(unsigned long)-1))
-			return -EINVAL;
-		if (!header->addr)
-			return -EINVAL;
-	}
+	ublk_stop_dev(ub);
+	cancel_work_sync(&ub->stop_work);
+	ublk_put_device(ub);
 	return 0;
 }
 
-static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
-		unsigned int issue_flags)
+static int ublk_ctrl_get_dev_info(struct io_uring_cmd *cmd)
 {
 	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
 	void __user *argp = (void __user *)(unsigned long)header->addr;
-	struct ublksrv_ctrl_dev_info info;
-	u32 cmd_op = cmd->cmd_op;
 	struct ublk_device *ub;
+	int ret = 0;
+
+	if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr)
+		return -EINVAL;
+
+	ub = ublk_get_device_from_id(header->dev_id);
+	if (!ub)
+		return -EINVAL;
+
+	if (copy_to_user(argp, &ub->dev_info, sizeof(ub->dev_info)))
+		ret = -EFAULT;
+	ublk_put_device(ub);
+
+	return ret;
+}
+
+static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
+		unsigned int issue_flags)
+{
+	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
 	int ret = -EINVAL;
 
 	ublk_ctrl_cmd_dump(cmd);
@@ -1465,38 +1478,23 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
 	if (!(issue_flags & IO_URING_F_SQE128))
 		goto out;
 
-	ret = ublk_ctrl_cmd_validate(cmd, &info);
-	if (ret)
+	ret = -EPERM;
+	if (!capable(CAP_SYS_ADMIN))
 		goto out;
 
 	ret = -ENODEV;
-	switch (cmd_op) {
+	switch (cmd->cmd_op) {
 	case UBLK_CMD_START_DEV:
-		ub = ublk_get_device_from_id(header->dev_id);
-		if (ub) {
-			ret = ublk_ctrl_start_dev(ub, cmd);
-			ublk_put_device(ub);
-		}
+		ret = ublk_ctrl_start_dev(cmd);
 		break;
 	case UBLK_CMD_STOP_DEV:
-		ub = ublk_get_device_from_id(header->dev_id);
-		if (ub) {
-			ret = ublk_ctrl_stop_dev(ub);
-			ublk_put_device(ub);
-		}
+		ret = ublk_ctrl_stop_dev(cmd);
 		break;
 	case UBLK_CMD_GET_DEV_INFO:
-		ub = ublk_get_device_from_id(header->dev_id);
-		if (ub) {
-			if (copy_to_user(argp, &ub->dev_info, sizeof(info)))
-				ret = -EFAULT;
-			else
-				ret = 0;
-			ublk_put_device(ub);
-		}
+		ret = ublk_ctrl_get_dev_info(cmd);
 		break;
 	case UBLK_CMD_ADD_DEV:
-		ret = ublk_ctrl_add_dev(&info, argp, header->dev_id);
+		ret = ublk_ctrl_add_dev(cmd);
 		break;
 	case UBLK_CMD_DEL_DEV:
 		ret = ublk_ctrl_del_dev(header->dev_id);

From patchwork Thu Jul 21 05:16:30 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924724
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 6/8] ublk: fold __ublk_create_dev into ublk_ctrl_add_dev
Date: Thu, 21 Jul 2022 07:16:30 +0200
Message-Id: <20220721051632.1676890-7-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

Fold __ublk_create_dev into its only caller to avoid the packing and
unpacking of the return value into an ERR_PTR.
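[Editorial aside: the ERR_PTR packing the commit message refers to can be shown with a small userspace re-implementation of the kernel idiom. The `create_obj` helper is hypothetical; `ERR_PTR`/`PTR_ERR`/`IS_ERR` mirror the kernel's definitions, which reserve the top 4095 address values for encoded errnos.]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Userspace model of the kernel's ERR_PTR idiom: a helper encodes its
 * errno as an invalid pointer, and the caller immediately decodes it
 * again.  When the helper has exactly one caller, folding it in lets
 * the int error code flow directly, with no pack/unpack round trip. */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	/* Mirrors the kernel's -MAX_ERRNO (4095) convention. */
	return (unsigned long)p >= (unsigned long)-4095;
}

/* Hypothetical allocator standing in for __ublk_create_dev(). */
static void *create_obj(int fail)
{
	if (fail)
		return ERR_PTR(-ENOMEM);	/* pack the error... */
	return malloc(16);
}
```

The caller then has to write `if (IS_ERR(ub)) ret = PTR_ERR(ub);` just to recover the integer the helper started with, which is exactly the churn the fold removes.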
Signed-off-by: Christoph Hellwig
Reviewed-by: ZiyangZhang
Reviewed-by: Ming Lei
---
 drivers/block/ublk_drv.c | 28 ++++++++--------------------
 1 file changed, 8 insertions(+), 20 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index af70c18796e70..7d57cbecfc8a0 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1017,23 +1017,6 @@ static int __ublk_alloc_dev_number(struct ublk_device *ub, int idx)
 	return err;
 }
 
-static struct ublk_device *__ublk_create_dev(int idx)
-{
-	struct ublk_device *ub = NULL;
-	int ret;
-
-	ub = kzalloc(sizeof(*ub), GFP_KERNEL);
-	if (!ub)
-		return ERR_PTR(-ENOMEM);
-
-	ret = __ublk_alloc_dev_number(ub, idx);
-	if (ret < 0) {
-		kfree(ub);
-		return ERR_PTR(ret);
-	}
-	return ub;
-}
-
 static void __ublk_destroy_dev(struct ublk_device *ub)
 {
 	spin_lock(&ublk_idr_lock);
@@ -1357,9 +1340,14 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 	if (ret)
 		return ret;
 
-	ub = __ublk_create_dev(header->dev_id);
-	if (IS_ERR(ub)) {
-		ret = PTR_ERR(ub);
+	ret = -ENOMEM;
+	ub = kzalloc(sizeof(*ub), GFP_KERNEL);
+	if (!ub)
+		goto out_unlock;
+
+	ret = __ublk_alloc_dev_number(ub, header->dev_id);
+	if (ret < 0) {
+		kfree(ub);
 		goto out_unlock;
 	}

From patchwork Thu Jul 21 05:16:31 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12924726
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 7/8] ublk: rewrite ublk_ctrl_get_queue_affinity to not rely on hctx->cpumask
Date: Thu, 21 Jul 2022 07:16:31 +0200
Message-Id: <20220721051632.1676890-8-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>
List-ID: linux-block@vger.kernel.org

Looking at the hctxs and their cpumask is not safe without at the very
least holding an RCU reference.  It also requires the queue to be set up
before starting the device, which leads to rather awkward lifetime
rules.
Instead rewrite ublk_ctrl_get_queue_affinity to call blk_mq_map_queues on
an on-stack blk_mq_queue_map and build the cpumask from that.

Note: given that ublk has not made it into a released kernel it might make
sense to change the ABI for this command to instead copy the qmap.mq_map
array (a nr_cpu_ids sized array of unsigned integer values where the CPU
index directly points to the queue) to userspace.  That way all the
information could be transported to userspace in a single command.

Signed-off-by: Christoph Hellwig
---
 drivers/block/ublk_drv.c | 63 ++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 31 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 7d57cbecfc8a0..67c6c46b8e07e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1245,26 +1245,16 @@ static int ublk_ctrl_start_dev(struct io_uring_cmd *cmd)
 	return ret;
 }
 
-static struct blk_mq_hw_ctx *ublk_get_hw_queue(struct ublk_device *ub,
-		unsigned int index)
-{
-	struct blk_mq_hw_ctx *hctx;
-	unsigned long i;
-
-	queue_for_each_hw_ctx(ub->ub_queue, hctx, i)
-		if (hctx->queue_num == index)
-			return hctx;
-	return NULL;
-}
-
 static int ublk_ctrl_get_queue_affinity(struct io_uring_cmd *cmd)
 {
 	struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
 	void __user *argp = (void __user *)(unsigned long)header->addr;
-	struct blk_mq_hw_ctx *hctx;
+	struct blk_mq_queue_map qmap = {};
+	cpumask_var_t cpumask;
 	struct ublk_device *ub;
 	unsigned long queue;
 	unsigned int retlen;
+	unsigned int i;
 	int ret = -EINVAL;
 
 	if (header->len * BITS_PER_BYTE < nr_cpu_ids)
@@ -1276,30 +1266,41 @@ static int ublk_ctrl_get_queue_affinity(struct io_uring_cmd *cmd)
 
 	ub = ublk_get_device_from_id(header->dev_id);
 	if (!ub)
-		goto out;
+		return -EINVAL;
 
 	queue = header->data[0];
 	if (queue >= ub->dev_info.nr_hw_queues)
-		goto out;
-	hctx = ublk_get_hw_queue(ub, queue);
-	if (!hctx)
-		goto out;
+		goto out_put_device;
+
+	qmap.mq_map = kcalloc(nr_cpu_ids, sizeof(qmap.mq_map), GFP_KERNEL);
+	if (!qmap.mq_map)
+		goto out_put_device;
+	qmap.nr_queues = ub->dev_info.nr_hw_queues;
+	blk_mq_map_queues(&qmap);
+
+	ret = -ENOMEM;
+	if (!zalloc_cpumask_var(&cpumask, GFP_KERNEL))
+		goto out_free_qmap;
+
+	for_each_possible_cpu(i)
+		if (qmap.mq_map[i] == queue)
+			cpumask_set_cpu(i, cpumask);
+
+	ret = -EFAULT;
 	retlen = min_t(unsigned short, header->len, cpumask_size());
-	if (copy_to_user(argp, hctx->cpumask, retlen)) {
-		ret = -EFAULT;
-		goto out;
-	}
-	if (retlen != header->len) {
-		if (clear_user(argp + retlen, header->len - retlen)) {
-			ret = -EFAULT;
-			goto out;
-		}
-	}
+	if (copy_to_user(argp, cpumask, retlen))
+		goto out_put_device;
+
+	if (retlen != header->len &&
+	    clear_user(argp + retlen, header->len - retlen))
+		goto out_put_device;
+
 	ret = 0;
-
- out:
-	if (ub)
-		ublk_put_device(ub);
+	free_cpumask_var(cpumask);
+out_free_qmap:
+	kfree(qmap.mq_map);
+out_put_device:
+	ublk_put_device(ub);
 	return ret;
 }

From patchwork Thu Jul 21 05:16:32 2022
X-Patchwork-Id: 12924727
From: Christoph Hellwig
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org
Subject: [PATCH 8/8] ublk: defer disk allocation
Date: Thu, 21 Jul 2022 07:16:32 +0200
Message-Id: <20220721051632.1676890-9-hch@lst.de>
In-Reply-To: <20220721051632.1676890-1-hch@lst.de>

Defer allocating the gendisk and request_queue until UBLK_CMD_START_DEV is
called.  This avoids funky lifetimes where a disk is allocated and then
can be added and removed multiple times, which has never been supported
by the block layer.
Signed-off-by: Christoph Hellwig
---
 drivers/block/ublk_drv.c | 202 ++++++++++++++++-----------------------
 1 file changed, 85 insertions(+), 117 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 67c6c46b8e07e..85c3277ade372 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -112,7 +112,6 @@ struct ublk_queue {
 
 struct ublk_device {
 	struct gendisk *ub_disk;
-	struct request_queue *ub_queue;
 
 	char *__queues;
@@ -126,6 +125,7 @@ struct ublk_device {
 	struct device cdev_dev;
 
 #define UB_STATE_OPEN	(1 << 0)
+#define UB_STATE_USED	(1 << 1)
 	unsigned long state;
 	int ub_number;
@@ -156,8 +156,6 @@ static DEFINE_MUTEX(ublk_ctl_mutex);
 
 static struct miscdevice ublk_misc;
 
-static struct lock_class_key ublk_bio_compl_lkclass;
-
 static inline bool ublk_can_use_task_work(const struct ublk_queue *ubq)
 {
 	if (IS_BUILTIN(CONFIG_BLK_DEV_UBLK) &&
@@ -209,8 +207,17 @@ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
 			PAGE_SIZE);
 }
 
+static void ublk_free_disk(struct gendisk *disk)
+{
+	struct ublk_device *ub = disk->private_data;
+
+	clear_bit(UB_STATE_USED, &ub->state);
+	put_device(&ub->cdev_dev);
+}
+
 static const struct block_device_operations ub_fops = {
 	.owner =	THIS_MODULE,
+	.free_disk =	ublk_free_disk,
 };
 
 #define UBLK_MAX_PIN_PAGES	32
@@ -801,13 +808,16 @@ static void ublk_cancel_dev(struct ublk_device *ub)
 static void ublk_stop_dev(struct ublk_device *ub)
 {
 	mutex_lock(&ub->mutex);
-	if (!disk_live(ub->ub_disk))
+	if (ub->dev_info.state != UBLK_S_DEV_LIVE)
 		goto unlock;
 
 	del_gendisk(ub->ub_disk);
 	ub->dev_info.state = UBLK_S_DEV_DEAD;
 	ub->dev_info.ublksrv_pid = -1;
 	ublk_cancel_dev(ub);
+	blk_mq_free_tag_set(&ub->tag_set);
+	put_disk(ub->ub_disk);
+	ub->ub_disk = NULL;
 unlock:
 	mutex_unlock(&ub->mutex);
 	cancel_delayed_work_sync(&ub->monitor_work);
@@ -1033,12 +1043,6 @@ static void ublk_cdev_rel(struct device *dev)
 {
 	struct ublk_device *ub = container_of(dev, struct ublk_device, cdev_dev);
 
-	blk_mq_destroy_queue(ub->ub_queue);
-
-	put_disk(ub->ub_disk);
-
-	blk_mq_free_tag_set(&ub->tag_set);
-
 	ublk_deinit_queues(ub);
 	__ublk_destroy_dev(ub);
@@ -1078,107 +1082,6 @@ static void ublk_stop_work_fn(struct work_struct *work)
 	ublk_stop_dev(ub);
 }
 
-static void ublk_update_capacity(struct ublk_device *ub)
-{
-	unsigned int max_rq_bytes;
-
-	/* make max request buffer size aligned with PAGE_SIZE */
-	max_rq_bytes = round_down(ub->dev_info.rq_max_blocks <<
-			ub->bs_shift, PAGE_SIZE);
-	ub->dev_info.rq_max_blocks = max_rq_bytes >> ub->bs_shift;
-
-	set_capacity(ub->ub_disk, ub->dev_info.dev_blocks << (ub->bs_shift - 9));
-}
-
-/* add disk & cdev, cleanup everything in case of failure */
-static int ublk_add_dev(struct ublk_device *ub)
-{
-	struct gendisk *disk;
-	int err = -ENOMEM;
-	int bsize;
-
-	/* We are not ready to support zero copy */
-	ub->dev_info.flags[0] &= ~UBLK_F_SUPPORT_ZERO_COPY;
-
-	bsize = ub->dev_info.block_size;
-	ub->bs_shift = ilog2(bsize);
-
-	ub->dev_info.nr_hw_queues = min_t(unsigned int,
-			ub->dev_info.nr_hw_queues, nr_cpu_ids);
-
-	INIT_WORK(&ub->stop_work, ublk_stop_work_fn);
-	INIT_DELAYED_WORK(&ub->monitor_work, ublk_daemon_monitor_work);
-
-	if (ublk_init_queues(ub))
-		goto out_destroy_dev;
-
-	ub->tag_set.ops = &ublk_mq_ops;
-	ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
-	ub->tag_set.queue_depth = ub->dev_info.queue_depth;
-	ub->tag_set.numa_node = NUMA_NO_NODE;
-	ub->tag_set.cmd_size = sizeof(struct ublk_rq_data);
-	ub->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
-	ub->tag_set.driver_data = ub;
-
-	err = blk_mq_alloc_tag_set(&ub->tag_set);
-	if (err)
-		goto out_deinit_queues;
-
-	ub->ub_queue = blk_mq_init_queue(&ub->tag_set);
-	if (IS_ERR(ub->ub_queue)) {
-		err = PTR_ERR(ub->ub_queue);
-		goto out_cleanup_tags;
-	}
-	ub->ub_queue->queuedata = ub;
-
-	disk = ub->ub_disk = blk_mq_alloc_disk_for_queue(ub->ub_queue,
-			&ublk_bio_compl_lkclass);
-	if (!disk) {
-		err = -ENOMEM;
-		goto out_free_request_queue;
-	}
-
-	blk_queue_logical_block_size(ub->ub_queue, bsize);
-	blk_queue_physical_block_size(ub->ub_queue, bsize);
-	blk_queue_io_min(ub->ub_queue, bsize);
-
-	blk_queue_max_hw_sectors(ub->ub_queue, ub->dev_info.rq_max_blocks <<
-			(ub->bs_shift - 9));
-
-	ub->ub_queue->limits.discard_granularity = PAGE_SIZE;
-
-	blk_queue_max_discard_sectors(ub->ub_queue, UINT_MAX >> 9);
-	blk_queue_max_write_zeroes_sectors(ub->ub_queue, UINT_MAX >> 9);
-
-	ublk_update_capacity(ub);
-
-	disk->fops = &ub_fops;
-	disk->private_data = ub;
-	disk->queue = ub->ub_queue;
-	sprintf(disk->disk_name, "ublkb%d", ub->ub_number);
-
-	mutex_init(&ub->mutex);
-
-	/* add char dev so that ublksrv daemon can be setup */
-	err = ublk_add_chdev(ub);
-	if (err)
-		return err;
-
-	/* don't expose disk now until we got start command from cdev */
-
-	return 0;
-
-out_free_request_queue:
-	blk_mq_destroy_queue(ub->ub_queue);
-out_cleanup_tags:
-	blk_mq_free_tag_set(&ub->tag_set);
-out_deinit_queues:
-	ublk_deinit_queues(ub);
-out_destroy_dev:
-	__ublk_destroy_dev(ub);
-	return err;
-}
-
 static void ublk_remove(struct ublk_device *ub)
 {
 	ublk_stop_dev(ub);
@@ -1209,6 +1112,7 @@ static int ublk_ctrl_start_dev(struct io_uring_cmd *cmd)
 	int ublksrv_pid = (int)header->data[0];
 	unsigned long dev_blocks = header->data[1];
 	struct ublk_device *ub;
+	struct gendisk *disk;
 	int ret = -EINVAL;
 
 	if (ublksrv_pid <= 0)
@@ -1223,22 +1127,62 @@ static int ublk_ctrl_start_dev(struct io_uring_cmd *cmd)
 	schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD);
 
 	mutex_lock(&ub->mutex);
-	if (disk_live(ub->ub_disk)) {
+	if (ub->dev_info.state == UBLK_S_DEV_LIVE ||
+	    test_bit(UB_STATE_USED, &ub->state)) {
 		ret = -EEXIST;
 		goto out_unlock;
 	}
 
 	/* We may get disk size updated */
-	if (dev_blocks) {
+	if (dev_blocks)
 		ub->dev_info.dev_blocks = dev_blocks;
-		ublk_update_capacity(ub);
+
+	ub->tag_set.ops = &ublk_mq_ops;
+	ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
+	ub->tag_set.queue_depth = ub->dev_info.queue_depth;
+	ub->tag_set.numa_node = NUMA_NO_NODE;
+	ub->tag_set.cmd_size = sizeof(struct ublk_rq_data);
+	ub->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	ub->tag_set.driver_data = ub;
+	ret = blk_mq_alloc_tag_set(&ub->tag_set);
+	if (ret)
+		goto out_unlock;
+
+	disk = blk_mq_alloc_disk(&ub->tag_set, ub);
+	if (IS_ERR(disk)) {
+		ret = PTR_ERR(disk);
+		goto out_free_tag_set;
 	}
+	sprintf(disk->disk_name, "ublkb%d", ub->ub_number);
+	disk->fops = &ub_fops;
+	disk->private_data = ub;
+
+	blk_queue_logical_block_size(disk->queue, ub->dev_info.block_size);
+	blk_queue_physical_block_size(disk->queue, ub->dev_info.block_size);
+	blk_queue_io_min(disk->queue, ub->dev_info.block_size);
+	blk_queue_max_hw_sectors(disk->queue,
+			ub->dev_info.rq_max_blocks << (ub->bs_shift - 9));
+	disk->queue->limits.discard_granularity = PAGE_SIZE;
+	blk_queue_max_discard_sectors(disk->queue, UINT_MAX >> 9);
+	blk_queue_max_write_zeroes_sectors(disk->queue, UINT_MAX >> 9);
+
+	set_capacity(disk, ub->dev_info.dev_blocks << (ub->bs_shift - 9));
+
 	ub->dev_info.ublksrv_pid = ublksrv_pid;
-	ret = add_disk(ub->ub_disk);
+	ub->ub_disk = disk;
+	get_device(&ub->cdev_dev);
+	ret = add_disk(disk);
 	if (ret)
-		goto out_unlock;
+		goto out_put_disk;
 
+	set_bit(UB_STATE_USED, &ub->state);
 	ub->dev_info.state = UBLK_S_DEV_LIVE;
+	goto out_unlock;
+
+out_put_disk:
+	put_disk(disk);
+out_free_tag_set:
+	blk_mq_free_tag_set(&ub->tag_set);
 out_unlock:
 	mutex_unlock(&ub->mutex);
 	ublk_put_device(ub);
@@ -1319,6 +1263,7 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 	void __user *argp = (void __user *)(unsigned long)header->addr;
 	struct ublksrv_ctrl_dev_info info;
 	struct ublk_device *ub;
+	unsigned int max_rq_bytes;
 	int ret = -EINVAL;
 
 	if (header->len < sizeof(info) || !header->addr)
@@ -1352,12 +1297,35 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 		goto out_unlock;
 	}
 
+	mutex_init(&ub->mutex);
+	INIT_WORK(&ub->stop_work, ublk_stop_work_fn);
+	INIT_DELAYED_WORK(&ub->monitor_work, ublk_daemon_monitor_work);
+
 	memcpy(&ub->dev_info, &info, sizeof(info));
 
 	/* update device id */
 	ub->dev_info.dev_id = ub->ub_number;
 
-	ret = ublk_add_dev(ub);
+	/* We are not ready to support zero copy */
+	ub->dev_info.flags[0] &= ~UBLK_F_SUPPORT_ZERO_COPY;
+
+	ub->dev_info.nr_hw_queues = min_t(unsigned int,
+			ub->dev_info.nr_hw_queues, nr_cpu_ids);
+	ub->bs_shift = ilog2(ub->dev_info.block_size);
+
+	/* make max request buffer size aligned with PAGE_SIZE */
+	max_rq_bytes = ub->dev_info.rq_max_blocks << ub->bs_shift;
+	ub->dev_info.rq_max_blocks =
+		round_down(max_rq_bytes, PAGE_SIZE) >> ub->bs_shift;
+
+	ret = ublk_init_queues(ub);
+	if (ret) {
+		__ublk_destroy_dev(ub);
+		goto out_unlock;
+	}
+
+	/* add char dev so that ublksrv daemon can be setup */
+	ret = ublk_add_chdev(ub);
 	if (ret)
 		goto out_unlock;