From patchwork Thu Jun 25 09:11:20 2015
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 6672801
From: Ilya Dryomov
To: ceph-devel@vger.kernel.org
Subject: [PATCH 3/3] rbd: queue_depth map option
Date: Thu, 25 Jun 2015 12:11:20 +0300
Message-Id: <1435223480-35238-4-git-send-email-idryomov@gmail.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1435223480-35238-1-git-send-email-idryomov@gmail.com>
References: <1435223480-35238-1-git-send-email-idryomov@gmail.com>
X-Mailing-List: ceph-devel@vger.kernel.org

nr_requests (/sys/block/rbd<id>/queue/nr_requests) is pretty much
irrelevant in the blk-mq case, because each driver sets the maximum
depth it can handle and that is the number of tags preallocated at
setup time.  Users can't increase the queue depth beyond that value by
writing to nr_requests.

For rbd the default BLKDEV_MAX_RQ (128) is fine for most cases, but we
want to give users the opportunity to increase it.  Introduce a new
per-device queue_depth map option to do just that:

    $ sudo rbd map -o queue_depth=1024 ...
Signed-off-by: Ilya Dryomov
Reviewed-by: Alex Elder
---
 drivers/block/rbd.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index e502bce02d2c..b316ee48a30b 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -728,6 +728,7 @@ static struct rbd_client *rbd_client_find(struct ceph_options *ceph_opts)
  * (Per device) rbd map options
  */
 enum {
+	Opt_queue_depth,
 	Opt_last_int,
 	/* int args above */
 	Opt_last_string,
@@ -738,6 +739,7 @@ enum {
 };
 
 static match_table_t rbd_opts_tokens = {
+	{Opt_queue_depth, "queue_depth=%d"},
 	/* int args above */
 	/* string args above */
 	{Opt_read_only, "read_only"},
@@ -748,9 +750,11 @@ static match_table_t rbd_opts_tokens = {
 };
 
 struct rbd_options {
+	int	queue_depth;
 	bool	read_only;
 };
 
+#define RBD_QUEUE_DEPTH_DEFAULT	BLKDEV_MAX_RQ
 #define RBD_READ_ONLY_DEFAULT	false
 
 static int parse_rbd_opts_token(char *c, void *private)
@@ -774,6 +778,13 @@ static int parse_rbd_opts_token(char *c, void *private)
 	}
 
 	switch (token) {
+	case Opt_queue_depth:
+		if (intval < 1) {
+			pr_err("queue_depth out of range\n");
+			return -EINVAL;
+		}
+		rbd_opts->queue_depth = intval;
+		break;
 	case Opt_read_only:
 		rbd_opts->read_only = true;
 		break;
@@ -3761,10 +3772,9 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 
 	memset(&rbd_dev->tag_set, 0, sizeof(rbd_dev->tag_set));
 	rbd_dev->tag_set.ops = &rbd_mq_ops;
-	rbd_dev->tag_set.queue_depth = BLKDEV_MAX_RQ;
+	rbd_dev->tag_set.queue_depth = rbd_dev->opts->queue_depth;
 	rbd_dev->tag_set.numa_node = NUMA_NO_NODE;
-	rbd_dev->tag_set.flags =
-		BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	rbd_dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
 	rbd_dev->tag_set.nr_hw_queues = 1;
 	rbd_dev->tag_set.cmd_size = sizeof(struct work_struct);
 
@@ -4948,6 +4958,7 @@ static int rbd_add_parse_args(const char *buf,
 		goto out_mem;
 
 	rbd_opts->read_only = RBD_READ_ONLY_DEFAULT;
+	rbd_opts->queue_depth = RBD_QUEUE_DEPTH_DEFAULT;
 
 	copts = ceph_parse_options(options, mon_addrs,
 					mon_addrs + mon_addrs_size - 1,
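
A minimal usage sketch of the new option: the pool/image name
"mypool/myimage" and the rbd0 device index below are illustrative and
depend on the system.  Since the blk-mq tag set depth now follows the
map option, the effective value should be visible through nr_requests:

    $ sudo rbd map -o queue_depth=1024 mypool/myimage
    /dev/rbd0
    $ cat /sys/block/rbd0/queue/nr_requests
    1024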