From patchwork Thu Feb 17 03:13:44 2022
X-Patchwork-Submitter: Wang Jianchao
X-Patchwork-Id: 12749266
From: "Wang Jianchao (Kuaishou)"
To: Jens Axboe
Cc: Josef Bacik, Tejun Heo, Bart Van Assche, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RFC V4 1/6] blk: prepare to make blk-rq-qos pluggable and modular
Date: Thu, 17 Feb 2022 11:13:44 +0800
Message-Id: <20220217031349.98561-2-jianchao.wan9@gmail.com>
In-Reply-To: <20220217031349.98561-1-jianchao.wan9@gmail.com>
References: <20220217031349.98561-1-jianchao.wan9@gmail.com>
X-Mailing-List: linux-block@vger.kernel.org

blk-rq-qos is a standalone framework outside of the io-schedulers that can
be used to control or observe IO progress in the block layer via hooks.
blk-rq-qos is a great design, but right now it is completely fixed and
built in, which shuts out people who want to use it from an external
module.

This patch makes the blk-rq-qos policies pluggable and modular:

(1) Add code to maintain the rq_qos_ops. A rq-qos module needs to
    register itself with rq_qos_register(). The original enum rq_qos_id
    will be removed in a following patch; policies will instead use a
    dynamic id maintained by rq_qos_ida.
(2) Add a .init callback into rq_qos_ops. We use it to initialize the
    resource.
(3) Add /sys/block/x/queue/qos. We can use '+name' or '-name' to open or
    close a blk-rq-qos policy.

This patch mainly prepares the helper interfaces and makes no functional
changes. Following patches will adapt the code of wbt, iolatency, iocost
and ioprio to make them pluggable and modular one by one, and after that
the /sys/block/xxx/queue/qos interface will be exported.

Signed-off-by: Wang Jianchao (Kuaishou)
---
 block/blk-mq-debugfs.c |   9 +-
 block/blk-rq-qos.c     | 301 ++++++++++++++++++++++++++++++++++++++++-
 block/blk-rq-qos.h     |  39 +++++-
 include/linux/blkdev.h |   4 +
 4 files changed, 348 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 3a790eb4995c..8b6d557e1ad6 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -729,7 +729,10 @@ void blk_mq_debugfs_register(struct request_queue *q)
 
 	if (q->rq_qos) {
 		struct rq_qos *rqos = q->rq_qos;
-
+		/*
+		 * queue has not been registered right now, it is safe to
+		 * iterate the rqos w/o lock
+		 */
 		while (rqos) {
 			blk_mq_debugfs_register_rqos(rqos);
 			rqos = rqos->next;
@@ -844,7 +847,9 @@ void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 {
 	struct request_queue *q = rqos->q;
-	const char *dir_name = rq_qos_id_to_name(rqos->id);
+	const char *dir_name;
+
+	dir_name = rqos->ops->name ? rqos->ops->name : rq_qos_id_to_name(rqos->id);
 
 	if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
 		return;
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index e83af7bc7591..db13581ae878 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -2,6 +2,11 @@
 
 #include "blk-rq-qos.h"
 
+static DEFINE_IDA(rq_qos_ida);
+static int nr_rqos_blkcg_pols;
+static DEFINE_MUTEX(rq_qos_mutex);
+static LIST_HEAD(rq_qos_list);
+
 /*
  * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
  * false if 'v' + 1 would be bigger than 'below'.
@@ -294,11 +299,303 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
 
 void rq_qos_exit(struct request_queue *q)
 {
-	blk_mq_debugfs_unregister_queue_rqos(q);
-
+	/*
+	 * queue must have been unregistered here, it is safe to iterate
+	 * the list w/o lock
+	 */
 	while (q->rq_qos) {
 		struct rq_qos *rqos = q->rq_qos;
 		q->rq_qos = rqos->next;
 		rqos->ops->exit(rqos);
 	}
+	blk_mq_debugfs_unregister_queue_rqos(q);
+}
+
+static struct rq_qos *rq_qos_by_name(struct request_queue *q,
+				     const char *name)
+{
+	struct rq_qos *rqos;
+
+	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
+		if (!rqos->ops->name)
+			continue;
+
+		if (!strncmp(rqos->ops->name, name,
+			     strlen(rqos->ops->name)))
+			return rqos;
+	}
+	return NULL;
+}
+
+/*
+ * After the pluggable blk-qos, the rqos's life cycle becomes complicated,
+ * as we may modify the rqos list there. Except for the places where the
+ * queue is not registered, the following places may access the rqos
+ * list concurrently:
+ * (1) normal IO path, can be serialized by queue freezing
+ * (2) blkg_create, the .pd_init_fn() may access rqos, can be serialized
+ *     by queue_lock.
+ * (3) cgroup file, such as ioc_cost_model_write, rq_qos_get is for this
+ *     case to keep the rqos alive.
+ */
+struct rq_qos *rq_qos_get(struct request_queue *q, int id)
+{
+	struct rq_qos *rqos;
+
+	spin_lock_irq(&q->queue_lock);
+	rqos = rq_qos_by_id(q, id);
+	if (rqos && rqos->dying)
+		rqos = NULL;
+	if (rqos)
+		refcount_inc(&rqos->ref);
+	spin_unlock_irq(&q->queue_lock);
+	return rqos;
+}
+EXPORT_SYMBOL_GPL(rq_qos_get);
+
+void rq_qos_put(struct rq_qos *rqos)
+{
+	struct request_queue *q = rqos->q;
+
+	spin_lock_irq(&q->queue_lock);
+	refcount_dec(&rqos->ref);
+	if (rqos->dying)
+		wake_up(&rqos->waitq);
+	spin_unlock_irq(&q->queue_lock);
+}
+EXPORT_SYMBOL_GPL(rq_qos_put);
+
+void rq_qos_activate(struct request_queue *q,
+		     struct rq_qos *rqos, const struct rq_qos_ops *ops)
+{
+	struct rq_qos *pos;
+
+	rqos->dying = false;
+	refcount_set(&rqos->ref, 1);
+	init_waitqueue_head(&rqos->waitq);
+	rqos->id = ops->id;
+	rqos->ops = ops;
+	rqos->q = q;
+	rqos->next = NULL;
+
+	spin_lock_irq(&q->queue_lock);
+	pos = q->rq_qos;
+	if (pos) {
+		while (pos->next)
+			pos = pos->next;
+		pos->next = rqos;
+	} else {
+		q->rq_qos = rqos;
+	}
+	spin_unlock_irq(&q->queue_lock);
+
+	if (rqos->ops->debugfs_attrs)
+		blk_mq_debugfs_register_rqos(rqos);
+
+	if (ops->owner)
+		__module_get(ops->owner);
+}
+EXPORT_SYMBOL_GPL(rq_qos_activate);
+
+void rq_qos_deactivate(struct rq_qos *rqos)
+{
+	struct request_queue *q = rqos->q;
+	struct rq_qos **cur;
+
+	spin_lock_irq(&q->queue_lock);
+	rqos->dying = true;
+	/*
+	 * Drain all of the usage of get/put_rqos()
+	 */
+	wait_event_lock_irq(rqos->waitq,
+			    refcount_read(&rqos->ref) == 1, q->queue_lock);
+	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
+		if (*cur == rqos) {
+			*cur = rqos->next;
+			break;
+		}
+	}
+	spin_unlock_irq(&q->queue_lock);
+	blk_mq_debugfs_unregister_rqos(rqos);
+
+	if (rqos->ops->owner)
+		module_put(rqos->ops->owner);
+}
+EXPORT_SYMBOL_GPL(rq_qos_deactivate);
+
+static struct rq_qos_ops *rq_qos_op_find(const char *name)
+{
+	struct rq_qos_ops *pos;
+
+	list_for_each_entry(pos, &rq_qos_list, node) {
+		if (!strncmp(pos->name,
+			     name, strlen(pos->name)))
+			return pos;
+	}
+
+	return NULL;
+}
+
+int rq_qos_register(struct rq_qos_ops *ops)
+{
+	int ret, start;
+
+	mutex_lock(&rq_qos_mutex);
+
+	if (rq_qos_op_find(ops->name)) {
+		ret = -EEXIST;
+		goto out;
+	}
+
+	if (ops->flags & RQOS_FLAG_CGRP_POL &&
+	    nr_rqos_blkcg_pols >= (BLKCG_MAX_POLS - BLKCG_NON_RQOS_POLS)) {
+		ret = -ENOSPC;
+		goto out;
+	}
+
+	start = RQ_QOS_IOPRIO + 1;
+	ret = ida_simple_get(&rq_qos_ida, start, INT_MAX, GFP_KERNEL);
+	if (ret < 0)
+		goto out;
+
+	if (ops->flags & RQOS_FLAG_CGRP_POL)
+		nr_rqos_blkcg_pols++;
+
+	ops->id = ret;
+	ret = 0;
+	INIT_LIST_HEAD(&ops->node);
+	list_add_tail(&ops->node, &rq_qos_list);
+out:
+	mutex_unlock(&rq_qos_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(rq_qos_register);
+
+void rq_qos_unregister(struct rq_qos_ops *ops)
+{
+	mutex_lock(&rq_qos_mutex);
+
+	if (ops->flags & RQOS_FLAG_CGRP_POL)
+		nr_rqos_blkcg_pols--;
+	list_del_init(&ops->node);
+	ida_simple_remove(&rq_qos_ida, ops->id);
+	mutex_unlock(&rq_qos_mutex);
+}
+EXPORT_SYMBOL_GPL(rq_qos_unregister);
+
+ssize_t queue_qos_show(struct request_queue *q, char *buf)
+{
+	struct rq_qos_ops *ops;
+	struct rq_qos *rqos;
+	int ret = 0;
+
+	mutex_lock(&rq_qos_mutex);
+	/*
+	 * Show the policies in the order of being invoked.
+	 * queue_lock is not needed here as the sysfs_lock
+	 * protects us from queue_qos_store()
+	 */
+	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
+		if (!rqos->ops->name)
+			continue;
+		ret += sprintf(buf + ret, "[%s] ", rqos->ops->name);
+	}
+	list_for_each_entry(ops, &rq_qos_list, node) {
+		if (!rq_qos_by_name(q, ops->name))
+			ret += sprintf(buf + ret, "%s ", ops->name);
+	}
+
+	ret--; /* overwrite the last space */
+	ret += sprintf(buf + ret, "\n");
+	mutex_unlock(&rq_qos_mutex);
+
+	return ret;
+}
+
+static int rq_qos_switch(struct request_queue *q,
+			 const struct rq_qos_ops *ops,
+			 struct rq_qos *rqos)
+{
+	int ret;
+
+	blk_mq_freeze_queue(q);
+	if (!rqos) {
+		ret = ops->init(q);
+	} else {
+		ops->exit(rqos);
+		ret = 0;
+	}
+	blk_mq_unfreeze_queue(q);
+
+	return ret;
+}
+
+ssize_t queue_qos_store(struct request_queue *q, const char *page,
+			size_t count)
+{
+	const struct rq_qos_ops *ops;
+	struct rq_qos *rqos;
+	const char *qosname;
+	char *buf;
+	bool add;
+	int ret;
+
+	if (!blk_queue_registered(q))
+		return -ENOENT;
+
+	buf = kstrdup(page, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	buf = strim(buf);
+	if (buf[0] != '+' && buf[0] != '-') {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	add = buf[0] == '+';
+	qosname = buf + 1;
+
+	rqos = rq_qos_by_name(q, qosname);
+	if ((buf[0] == '+' && rqos)) {
+		ret = -EEXIST;
+		goto out;
+	}
+
+	if ((buf[0] == '-' && !rqos)) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	if (add) {
+		mutex_lock(&rq_qos_mutex);
+		ops = rq_qos_op_find(qosname);
+		if (!ops) {
+			/*
+			 * module_init callback may request this mutex
+			 */
+			mutex_unlock(&rq_qos_mutex);
+			request_module("%s", qosname);
+			mutex_lock(&rq_qos_mutex);
+			ops = rq_qos_op_find(qosname);
+		}
+		if (!ops) {
+			ret = -EINVAL;
+		} else if (ops->owner && !try_module_get(ops->owner)) {
+			ops = NULL;
+			ret = -EAGAIN;
+		}
+		mutex_unlock(&rq_qos_mutex);
+		if (!ops)
+			goto out;
+	} else {
+		ops = rqos->ops;
+	}
+
+	ret = rq_qos_switch(q, ops, add ?
+				  NULL : rqos);
+
+	if (add)
+		module_put(ops->owner);
+out:
+	kfree(buf);
+	return ret ? ret : count;
 }
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 3cfbc8668cba..586c3f5ec152 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -26,16 +26,28 @@ struct rq_wait {
 };
 
 struct rq_qos {
-	struct rq_qos_ops *ops;
+	const struct rq_qos_ops *ops;
 	struct request_queue *q;
 	enum rq_qos_id id;
+	refcount_t ref;
+	wait_queue_head_t waitq;
+	bool dying;
 	struct rq_qos *next;
 #ifdef CONFIG_BLK_DEBUG_FS
 	struct dentry *debugfs_dir;
 #endif
 };
 
+enum {
+	RQOS_FLAG_CGRP_POL	= 1 << 0,
+};
+
 struct rq_qos_ops {
+	struct list_head node;
+	struct module *owner;
+	const char *name;
+	int flags;
+	int id;
 	void (*throttle)(struct rq_qos *, struct bio *);
 	void (*track)(struct rq_qos *, struct request *, struct bio *);
 	void (*merge)(struct rq_qos *, struct request *, struct bio *);
@@ -46,6 +58,7 @@ struct rq_qos_ops {
 	void (*cleanup)(struct rq_qos *, struct bio *);
 	void (*queue_depth_changed)(struct rq_qos *);
 	void (*exit)(struct rq_qos *);
+	int (*init)(struct request_queue *);
 	const struct blk_mq_debugfs_attr *debugfs_attrs;
 };
 
@@ -70,6 +83,19 @@ static inline struct rq_qos *rq_qos_id(struct request_queue *q,
 	return rqos;
 }
 
+static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
+{
+	struct rq_qos *rqos;
+
+	WARN_ON(!mutex_is_locked(&q->sysfs_lock) && !spin_is_locked(&q->queue_lock));
+
+	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
+		if (rqos->id == id)
+			break;
+	}
+	return rqos;
+}
+
 static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
 {
 	return rq_qos_id(q, RQ_QOS_WBT);
@@ -132,6 +158,17 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
 	blk_mq_debugfs_unregister_rqos(rqos);
 }
 
+int rq_qos_register(struct rq_qos_ops *ops);
+void rq_qos_unregister(struct rq_qos_ops *ops);
+void rq_qos_activate(struct request_queue *q,
+		     struct rq_qos *rqos, const struct rq_qos_ops *ops);
+void rq_qos_deactivate(struct rq_qos
+		       *rqos);
+ssize_t queue_qos_show(struct request_queue *q, char *buf);
+ssize_t queue_qos_store(struct request_queue *q, const char *page,
+			size_t count);
+struct rq_qos *rq_qos_get(struct request_queue *q, int id);
+void rq_qos_put(struct rq_qos *rqos);
+
 typedef bool (acquire_inflight_cb_t)(struct rq_wait *rqw, void *private_data);
 typedef void (cleanup_cb_t)(struct rq_wait *rqw, void *private_data);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f35aea98bc35..d5698a7cda67 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -44,6 +44,10 @@ struct blk_crypto_profile;
  * Defined here to simplify include dependency.
  */
 #define BLKCG_MAX_POLS		6
+/*
+ * Non blk-rq-qos blkcg policies include blk-throttle and bfq
+ */
+#define BLKCG_NON_RQOS_POLS	2
 
 static inline int blk_validate_block_size(unsigned long bsize)
 {

From patchwork Thu Feb 17 03:13:45 2022
X-Patchwork-Submitter: Wang Jianchao
X-Patchwork-Id: 12749268
From: "Wang Jianchao (Kuaishou)"
To: Jens Axboe
Cc: Josef Bacik, Tejun Heo, Bart Van Assche, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RFC V4 2/6] blk-wbt: make wbt pluggable
Date: Thu, 17 Feb 2022 11:13:45 +0800
Message-Id:
 <20220217031349.98561-3-jianchao.wan9@gmail.com>
In-Reply-To: <20220217031349.98561-1-jianchao.wan9@gmail.com>
References: <20220217031349.98561-1-jianchao.wan9@gmail.com>
X-Mailing-List: linux-block@vger.kernel.org

This patch makes wbt pluggable through /sys/block/xxx/queue/qos. Some
queue_lock/unlock pairs are added to protect rq_qos_by_id() in
wbt_rq_qos(). By default, wbt is enabled, which is the same as the
previous code.

Signed-off-by: Wang Jianchao (Kuaishou)
---
 block/blk-mq-debugfs.c |  2 --
 block/blk-rq-qos.h     |  6 -----
 block/blk-wbt.c        | 51 +++++++++++++++++++++++++++++++++++-------
 block/blk-wbt.h        |  6 +++++
 4 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 8b6d557e1ad6..a5d78b094234 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -826,8 +826,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
 	switch (id) {
-	case RQ_QOS_WBT:
-		return "wbt";
 	case RQ_QOS_LATENCY:
 		return "latency";
 	case RQ_QOS_COST:
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 586c3f5ec152..171a83d6de45 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,6 @@ struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_WBT,
 	RQ_QOS_LATENCY,
 	RQ_QOS_COST,
 	RQ_QOS_IOPRIO,
@@ -96,11 +95,6 @@ static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
 	return rqos;
 }
 
-static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
-{
-	return rq_qos_id(q, RQ_QOS_WBT);
-}
-
 static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
 {
 	return rq_qos_id(q, RQ_QOS_LATENCY);
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 0c119be0e813..8aa85303bbbc 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -31,6 +31,13 @@
 #define CREATE_TRACE_POINTS
 #include
 
+static struct rq_qos_ops wbt_rqos_ops;
+
+struct rq_qos *wbt_rq_qos(struct request_queue *q)
+{
+
return rq_qos_by_id(q, wbt_rqos_ops.id); +} + static inline void wbt_clear_state(struct request *rq) { rq->wbt_flags = 0; @@ -628,9 +635,13 @@ static void wbt_requeue(struct rq_qos *rqos, struct request *rq) void wbt_set_write_cache(struct request_queue *q, bool write_cache_on) { - struct rq_qos *rqos = wbt_rq_qos(q); + struct rq_qos *rqos; + + spin_lock_irq(&q->queue_lock); + rqos = wbt_rq_qos(q); if (rqos) RQWB(rqos)->wc = write_cache_on; + spin_unlock_irq(&q->queue_lock); } /* @@ -638,14 +649,20 @@ void wbt_set_write_cache(struct request_queue *q, bool write_cache_on) */ void wbt_enable_default(struct request_queue *q) { - struct rq_qos *rqos = wbt_rq_qos(q); + struct rq_qos *rqos; + + spin_lock_irq(&q->queue_lock); + rqos = wbt_rq_qos(q); /* Throttling already enabled? */ if (rqos) { if (RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT) RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT; + + spin_unlock_irq(&q->queue_lock); return; } + spin_unlock_irq(&q->queue_lock); /* Queue not registered? Maybe shutting down... 
	 */
 	if (!blk_queue_registered(q))
@@ -692,6 +709,7 @@ static void wbt_exit(struct rq_qos *rqos)
 	struct rq_wb *rwb = RQWB(rqos);
 	struct request_queue *q = rqos->q;
 
+	rq_qos_deactivate(rqos);
 	blk_stat_remove_callback(q, rwb->cb);
 	blk_stat_free_callback(rwb->cb);
 	kfree(rwb);
@@ -702,15 +720,21 @@ static void wbt_exit(struct rq_qos *rqos)
  */
 void wbt_disable_default(struct request_queue *q)
 {
-	struct rq_qos *rqos = wbt_rq_qos(q);
+	struct rq_qos *rqos;
 	struct rq_wb *rwb;
+
+	spin_lock_irq(&q->queue_lock);
+	rqos = wbt_rq_qos(q);
 	if (!rqos)
-		return;
+		goto out;
+
 	rwb = RQWB(rqos);
 	if (rwb->enable_state == WBT_STATE_ON_DEFAULT) {
 		blk_stat_deactivate(rwb->cb);
 		rwb->enable_state = WBT_STATE_OFF_DEFAULT;
 	}
+out:
+	spin_unlock_irq(&q->queue_lock);
 }
 EXPORT_SYMBOL_GPL(wbt_disable_default);
 
@@ -803,6 +827,7 @@ static const struct blk_mq_debugfs_attr wbt_debugfs_attrs[] = {
 #endif
 
 static struct rq_qos_ops wbt_rqos_ops = {
+	.name = "blk-wbt",
 	.throttle = wbt_wait,
 	.issue = wbt_issue,
 	.track = wbt_track,
@@ -811,6 +836,7 @@ static struct rq_qos_ops wbt_rqos_ops = {
 	.cleanup = wbt_cleanup,
 	.queue_depth_changed = wbt_queue_depth_changed,
 	.exit = wbt_exit,
+	.init = wbt_init,
 #ifdef CONFIG_BLK_DEBUG_FS
 	.debugfs_attrs = wbt_debugfs_attrs,
 #endif
@@ -834,9 +860,6 @@ int wbt_init(struct request_queue *q)
 	for (i = 0; i < WBT_NUM_RWQ; i++)
 		rq_wait_init(&rwb->rq_wait[i]);
 
-	rwb->rqos.id = RQ_QOS_WBT;
-	rwb->rqos.ops = &wbt_rqos_ops;
-	rwb->rqos.q = q;
 	rwb->last_comp = rwb->last_issue = jiffies;
 	rwb->win_nsec = RWB_WINDOW_NSEC;
 	rwb->enable_state = WBT_STATE_ON_DEFAULT;
@@ -846,7 +869,7 @@ int wbt_init(struct request_queue *q)
 
 	/*
 	 * Assign rwb and add the stats callback.
	 */
-	rq_qos_add(q, &rwb->rqos);
+	rq_qos_activate(q, &rwb->rqos, &wbt_rqos_ops);
 	blk_stat_add_callback(q, rwb->cb);
 
 	rwb->min_lat_nsec = wbt_default_latency_nsec(q);
@@ -856,3 +879,15 @@ int wbt_init(struct request_queue *q)
 
 	return 0;
 }
+
+static __init int wbt_mod_init(void)
+{
+	return rq_qos_register(&wbt_rqos_ops);
+}
+
+static __exit void wbt_mod_exit(void)
+{
+	return rq_qos_unregister(&wbt_rqos_ops);
+}
+module_init(wbt_mod_init);
+module_exit(wbt_mod_exit);
diff --git a/block/blk-wbt.h b/block/blk-wbt.h
index 2eb01becde8c..e0d051cb00f7 100644
--- a/block/blk-wbt.h
+++ b/block/blk-wbt.h
@@ -88,6 +88,7 @@ static inline unsigned int wbt_inflight(struct rq_wb *rwb)
 
 #ifdef CONFIG_BLK_WBT
 
+struct rq_qos *wbt_rq_qos(struct request_queue *q);
 int wbt_init(struct request_queue *);
 void wbt_disable_default(struct request_queue *);
 void wbt_enable_default(struct request_queue *);
@@ -101,6 +102,11 @@ u64 wbt_default_latency_nsec(struct request_queue *);
 
 #else
 
+static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
+{
+	return NULL;
+}
+
 static inline void wbt_track(struct request *rq, enum wbt_flags flags)
 {
 }

From patchwork Thu Feb 17 03:13:46 2022
X-Patchwork-Submitter: Wang Jianchao
X-Patchwork-Id: 12749267
From: "Wang Jianchao (Kuaishou)"
To: Jens Axboe
Cc: Josef Bacik, Tejun Heo, Bart Van Assche, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RFC V4 3/6] blk-iolatency: make iolatency pluggable
Date: Thu, 17 Feb 2022 11:13:46 +0800
Message-Id: <20220217031349.98561-4-jianchao.wan9@gmail.com>
In-Reply-To: <20220217031349.98561-1-jianchao.wan9@gmail.com>
References: <20220217031349.98561-1-jianchao.wan9@gmail.com>
X-Mailing-List: linux-block@vger.kernel.org

Make blk-iolatency pluggable so that it can be opened or closed through
/sys/block/xxx/queue/qos.

Signed-off-by: Wang Jianchao (Kuaishou)
---
 block/blk-cgroup.c     |  6 ------
 block/blk-iolatency.c  | 34 ++++++++++++++++++++++++++--------
 block/blk-mq-debugfs.c |  2 --
 block/blk-rq-qos.h     |  6 ------
 block/blk.h            |  6 ------
 5 files changed, 26 insertions(+), 28 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 650f7e27989f..3ae2aa557aef 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1203,12 +1203,6 @@ int blkcg_init_queue(struct request_queue *q)
 	if (ret)
 		goto err_destroy_all;
 
-	ret = blk_iolatency_init(q);
-	if (ret) {
-		blk_throtl_exit(q);
-		goto err_destroy_all;
-	}
-
 	return 0;
 
 err_destroy_all:
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 6593c7123b97..a8a201d6d669 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -90,6 +90,12 @@ struct blk_iolatency {
 	atomic_t enabled;
 };
 
+static struct rq_qos_ops blkcg_iolatency_ops;
+static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
+{
+	return rq_qos_by_id(q, blkcg_iolatency_ops.id);
+}
+
 static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
 {
 	return container_of(rqos, struct blk_iolatency, rqos);
@@ -646,13 +652,19 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
 
 	del_timer_sync(&blkiolat->timer);
 	blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency);
+	rq_qos_deactivate(rqos);
 	kfree(blkiolat);
 }
 
+static int blk_iolatency_init(struct request_queue *q);
+
 static struct rq_qos_ops blkcg_iolatency_ops = {
+	.name = "blk-iolat",
+	.flags = RQOS_FLAG_CGRP_POL,
 	.throttle = blkcg_iolatency_throttle,
 	.done_bio = blkcg_iolatency_done_bio,
 	.exit = blkcg_iolatency_exit,
+	.init = blk_iolatency_init,
 };
 
 static void blkiolatency_timer_fn(struct timer_list *t)
@@ -727,15 +739,10 @@ int blk_iolatency_init(struct request_queue *q)
 		return -ENOMEM;
 
 	rqos = &blkiolat->rqos;
-	rqos->id = RQ_QOS_LATENCY;
-	rqos->ops = &blkcg_iolatency_ops;
-	rqos->q = q;
-
-	rq_qos_add(q, rqos);
-
+	rq_qos_activate(q, rqos, &blkcg_iolatency_ops);
 	ret = blkcg_activate_policy(q, &blkcg_policy_iolatency);
 	if (ret) {
-		rq_qos_del(q, rqos);
+		rq_qos_deactivate(rqos);
 		kfree(blkiolat);
 		return ret;
 	}
@@ -1046,12 +1053,23 @@ static struct blkcg_policy blkcg_policy_iolatency = {
 
 static int __init iolatency_init(void)
 {
-	return blkcg_policy_register(&blkcg_policy_iolatency);
+	int ret;
+
+	ret = rq_qos_register(&blkcg_iolatency_ops);
+	if (ret)
+		return ret;
+
+	ret = blkcg_policy_register(&blkcg_policy_iolatency);
+	if (ret)
+		rq_qos_unregister(&blkcg_iolatency_ops);
+
+	return ret;
 }
 
 static void __exit iolatency_exit(void)
 {
 	blkcg_policy_unregister(&blkcg_policy_iolatency);
+	rq_qos_unregister(&blkcg_iolatency_ops);
 }
 
 module_init(iolatency_init);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index a5d78b094234..918870b8de5b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -826,8 +826,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
 	switch (id) {
-	case RQ_QOS_LATENCY:
-		return "latency";
 	case RQ_QOS_COST:
 		return "cost";
 	case RQ_QOS_IOPRIO:
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 171a83d6de45..2a919db52fef 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,6 @@ struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_LATENCY,
 	RQ_QOS_COST,
 	RQ_QOS_IOPRIO,
 };
 
@@ -95,11 +94,6 @@ static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
 	return rqos;
 }
 
-static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
-{
-	return rq_qos_id(q, RQ_QOS_LATENCY);
-}
-
 static inline void rq_wait_init(struct rq_wait *rq_wait)
 {
 	atomic_set(&rq_wait->inflight, 0);
diff --git a/block/blk.h b/block/blk.h
index 8bd43b3ad33d..1a314257b6a3 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -400,12 +400,6 @@ static inline void blk_queue_bounce(struct request_queue *q, struct bio **bio)
 		__blk_queue_bounce(q, bio);
 }
 
-#ifdef CONFIG_BLK_CGROUP_IOLATENCY
-extern int blk_iolatency_init(struct request_queue *q);
-#else
-static inline int blk_iolatency_init(struct request_queue *q) { return 0; }
-#endif
-
 struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp);
 
 #ifdef CONFIG_BLK_DEV_ZONED

From patchwork Thu Feb 17 03:13:47 2022
X-Patchwork-Submitter: Wang Jianchao
X-Patchwork-Id: 12749270
From: "Wang Jianchao (Kuaishou)"
To: Jens Axboe
Cc: Josef Bacik, Tejun Heo, Bart Van Assche,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC V4 4/6] blk-iocost: make iocost pluggable
Date: Thu, 17 Feb 2022 11:13:47 +0800
Message-Id:
<20220217031349.98561-5-jianchao.wan9@gmail.com>
In-Reply-To: <20220217031349.98561-1-jianchao.wan9@gmail.com>
List-ID: linux-block@vger.kernel.org

Make blk-iocost pluggable so that it can be enabled or disabled through
/sys/block/xxx/queue/qos.

Signed-off-by: Wang Jianchao (Kuaishou)
---
 block/blk-iocost.c     | 52 ++++++++++++++++++++++++++----------------
 block/blk-mq-debugfs.c |  2 --
 block/blk-rq-qos.h     |  1 -
 3 files changed, 32 insertions(+), 23 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 769b64394298..5a3a45985b49 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -660,9 +660,10 @@ static struct ioc *rqos_to_ioc(struct rq_qos *rqos)
 	return container_of(rqos, struct ioc, rqos);
 }
 
+static struct rq_qos_ops ioc_rqos_ops;
 static struct ioc *q_to_ioc(struct request_queue *q)
 {
-	return rqos_to_ioc(rq_qos_id(q, RQ_QOS_COST));
+	return rqos_to_ioc(rq_qos_by_id(q, ioc_rqos_ops.id));
 }
 
 static const char *q_name(struct request_queue *q)
@@ -2810,6 +2811,7 @@ static void ioc_rqos_exit(struct rq_qos *rqos)
 	struct ioc *ioc = rqos_to_ioc(rqos);
 
 	blkcg_deactivate_policy(rqos->q, &blkcg_policy_iocost);
+	rq_qos_deactivate(rqos);
 
 	spin_lock_irq(&ioc->lock);
 	ioc->running = IOC_STOP;
@@ -2820,13 +2822,18 @@ static void ioc_rqos_exit(struct rq_qos *rqos)
 	kfree(ioc);
 }
 
+static int blk_iocost_init(struct request_queue *q);
+
 static struct rq_qos_ops ioc_rqos_ops = {
+	.name = "blk-iocost",
+	.flags = RQOS_FLAG_CGRP_POL,
 	.throttle = ioc_rqos_throttle,
 	.merge = ioc_rqos_merge,
 	.done_bio = ioc_rqos_done_bio,
 	.done = ioc_rqos_done,
 	.queue_depth_changed = ioc_rqos_queue_depth_changed,
 	.exit = ioc_rqos_exit,
+	.init = blk_iocost_init,
 };
 
 static int blk_iocost_init(struct request_queue *q)
@@ -2856,10 +2863,7 @@ static int blk_iocost_init(struct request_queue *q)
 	}
 
 	rqos = &ioc->rqos;
-	rqos->id = RQ_QOS_COST;
-	rqos->ops = &ioc_rqos_ops;
-	rqos->q = q;
-
+	rq_qos_activate(q, rqos, &ioc_rqos_ops);
 	spin_lock_init(&ioc->lock);
 	timer_setup(&ioc->timer, ioc_timer_fn, 0);
 	INIT_LIST_HEAD(&ioc->active_iocgs);
@@ -2883,10 +2887,9 @@ static int blk_iocost_init(struct request_queue *q)
 	 * called before policy activation completion, can't assume that the
 	 * target bio has an iocg associated and need to test for NULL iocg.
 	 */
-	rq_qos_add(q, rqos);
 	ret = blkcg_activate_policy(q, &blkcg_policy_iocost);
 	if (ret) {
-		rq_qos_del(q, rqos);
+		rq_qos_deactivate(rqos);
 		free_percpu(ioc->pcpu_stat);
 		kfree(ioc);
 		return ret;
@@ -3162,6 +3165,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 			     size_t nbytes, loff_t off)
 {
 	struct block_device *bdev;
+	struct rq_qos *rqos;
 	struct ioc *ioc;
 	u32 qos[NR_QOS_PARAMS];
 	bool enable, user;
@@ -3172,12 +3176,10 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	if (IS_ERR(bdev))
 		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(bdev_get_queue(bdev));
-	if (!ioc) {
-		ret = blk_iocost_init(bdev_get_queue(bdev));
-		if (ret)
-			goto err;
-		ioc = q_to_ioc(bdev_get_queue(bdev));
+	rqos = rq_qos_get(bdev_get_queue(bdev), ioc_rqos_ops.id);
+	if (!rqos) {
+		ret = -EOPNOTSUPP;
+		goto err;
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3329,6 +3331,7 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 				    size_t nbytes, loff_t off)
 {
 	struct block_device *bdev;
+	struct rq_qos *rqos;
 	struct ioc *ioc;
 	u64 u[NR_I_LCOEFS];
 	bool user;
@@ -3339,12 +3342,10 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	if (IS_ERR(bdev))
 		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(bdev_get_queue(bdev));
-	if (!ioc) {
-		ret = blk_iocost_init(bdev_get_queue(bdev));
-		if (ret)
-			goto err;
-		ioc = q_to_ioc(bdev_get_queue(bdev));
+	rqos = rq_qos_get(bdev_get_queue(bdev), ioc_rqos_ops.id);
+	if (!rqos) {
+		ret = -EOPNOTSUPP;
+		goto err;
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3441,12 +3442,23 @@ static struct blkcg_policy blkcg_policy_iocost = {
 
 static int
__init ioc_init(void)
 {
-	return blkcg_policy_register(&blkcg_policy_iocost);
+	int ret;
+
+	ret = rq_qos_register(&ioc_rqos_ops);
+	if (ret)
+		return ret;
+
+	ret = blkcg_policy_register(&blkcg_policy_iocost);
+	if (ret)
+		rq_qos_unregister(&ioc_rqos_ops);
+
+	return ret;
 }
 
 static void __exit ioc_exit(void)
 {
 	blkcg_policy_unregister(&blkcg_policy_iocost);
+	rq_qos_unregister(&ioc_rqos_ops);
 }
 
 module_init(ioc_init);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 918870b8de5b..45da42e9e242 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -826,8 +826,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
 	switch (id) {
-	case RQ_QOS_COST:
-		return "cost";
 	case RQ_QOS_IOPRIO:
 		return "ioprio";
 	}
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 2a919db52fef..6d691527cb51 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,6 @@ struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_COST,
 	RQ_QOS_IOPRIO,
 };

From patchwork Thu Feb 17 03:13:48 2022
X-Patchwork-Submitter: Wang Jianchao
X-Patchwork-Id: 12749271
From: "Wang Jianchao (Kuaishou)"
To: Jens Axboe
Cc: Josef Bacik, Tejun Heo,
    Bart Van Assche, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC V4 5/6] blk-ioprio: make ioprio pluggable and modular
Date: Thu, 17 Feb 2022 11:13:48 +0800
Message-Id: <20220217031349.98561-6-jianchao.wan9@gmail.com>
In-Reply-To: <20220217031349.98561-1-jianchao.wan9@gmail.com>
List-ID: linux-block@vger.kernel.org

Make blk-ioprio pluggable and modular. It can then be enabled or disabled
through /sys/block/xxx/queue/qos, and the module can be removed with rmmod
when it is not needed, which releases one blkcg policy slot. blk-ioprio.h
is removed since blk_ioprio_init() no longer needs to be external.

Signed-off-by: Wang Jianchao (Kuaishou)
---
 block/Kconfig          |  2 +-
 block/blk-cgroup.c     |  5 -----
 block/blk-ioprio.c     | 51 ++++++++++++++++++++++++++++--------------
 block/blk-ioprio.h     | 19 ----------------
 block/blk-mq-debugfs.c |  4 ----
 block/blk-rq-qos.c     |  2 +-
 block/blk-rq-qos.h     |  2 +-
 7 files changed, 37 insertions(+), 48 deletions(-)
 delete mode 100644 block/blk-ioprio.h

diff --git a/block/Kconfig b/block/Kconfig
index d5d4197b7ed2..9cc8e4688953 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -145,7 +145,7 @@ config BLK_CGROUP_IOCOST
 	  their share of the overall weight distribution.
 config BLK_CGROUP_IOPRIO
-	bool "Cgroup I/O controller for assigning an I/O priority class"
+	tristate "Cgroup I/O controller for assigning an I/O priority class"
 	depends on BLK_CGROUP
 	help
 	  Enable the .prio interface for assigning an I/O priority class to
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3ae2aa557aef..f617f7ba311d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -32,7 +32,6 @@
 #include
 #include
 #include "blk.h"
-#include "blk-ioprio.h"
 #include "blk-throttle.h"
 
 /*
@@ -1195,10 +1194,6 @@ int blkcg_init_queue(struct request_queue *q)
 	if (preloaded)
 		radix_tree_preload_end();
 
-	ret = blk_ioprio_init(q);
-	if (ret)
-		goto err_destroy_all;
-
 	ret = blk_throtl_init(q);
 	if (ret)
 		goto err_destroy_all;
diff --git a/block/blk-ioprio.c b/block/blk-ioprio.c
index 2e7f10e1c03f..1ec56f617202 100644
--- a/block/blk-ioprio.c
+++ b/block/blk-ioprio.c
@@ -17,7 +17,6 @@
 #include
 #include
 #include
-#include "blk-ioprio.h"
 #include "blk-rq-qos.h"
 
 /**
@@ -216,15 +215,24 @@ static void blkcg_ioprio_exit(struct rq_qos *rqos)
 		container_of(rqos, typeof(*blkioprio_blkg), rqos);
 
 	blkcg_deactivate_policy(rqos->q, &ioprio_policy);
+	rq_qos_deactivate(rqos);
 	kfree(blkioprio_blkg);
 }
 
+static int blk_ioprio_init(struct request_queue *q);
+
 static struct rq_qos_ops blkcg_ioprio_ops = {
+#if IS_MODULE(CONFIG_BLK_CGROUP_IOPRIO)
+	.owner	= THIS_MODULE,
+#endif
+	.flags	= RQOS_FLAG_CGRP_POL,
+	.name	= "blk-ioprio",
 	.track	= blkcg_ioprio_track,
 	.exit	= blkcg_ioprio_exit,
+	.init	= blk_ioprio_init,
 };
 
-int blk_ioprio_init(struct request_queue *q)
+static int blk_ioprio_init(struct request_queue *q)
 {
 	struct blk_ioprio *blkioprio_blkg;
 	struct rq_qos *rqos;
@@ -234,36 +242,45 @@ int blk_ioprio_init(struct request_queue *q)
 	if (!blkioprio_blkg)
 		return -ENOMEM;
 
+	/*
+	 * No need to worry that ioprio_blkcg_from_css() returns NULL, as
+	 * the queue is frozen right now.
+	 */
+	rqos = &blkioprio_blkg->rqos;
+	rq_qos_activate(q, rqos, &blkcg_ioprio_ops);
+
 	ret = blkcg_activate_policy(q, &ioprio_policy);
 	if (ret) {
+		rq_qos_deactivate(rqos);
 		kfree(blkioprio_blkg);
-		return ret;
 	}
 
-	rqos = &blkioprio_blkg->rqos;
-	rqos->id = RQ_QOS_IOPRIO;
-	rqos->ops = &blkcg_ioprio_ops;
-	rqos->q = q;
-
-	/*
-	 * Registering the rq-qos policy after activating the blk-cgroup
-	 * policy guarantees that ioprio_blkcg_from_bio(bio) != NULL in the
-	 * rq-qos callbacks.
-	 */
-	rq_qos_add(q, rqos);
-
-	return 0;
+	return ret;
 }
 
 static int __init ioprio_init(void)
 {
-	return blkcg_policy_register(&ioprio_policy);
+	int ret;
+
+	ret = rq_qos_register(&blkcg_ioprio_ops);
+	if (ret)
+		return ret;
+
+	ret = blkcg_policy_register(&ioprio_policy);
+	if (ret)
+		rq_qos_unregister(&blkcg_ioprio_ops);
+
+	return ret;
 }
 
 static void __exit ioprio_exit(void)
 {
 	blkcg_policy_unregister(&ioprio_policy);
+	rq_qos_unregister(&blkcg_ioprio_ops);
 }
 
 module_init(ioprio_init);
 module_exit(ioprio_exit);
+MODULE_AUTHOR("Bart Van Assche");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Cgroup I/O controller for assigning an I/O priority class");
diff --git a/block/blk-ioprio.h b/block/blk-ioprio.h
deleted file mode 100644
index a7785c2f1aea..000000000000
--- a/block/blk-ioprio.h
+++ /dev/null
@@ -1,19 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-#ifndef _BLK_IOPRIO_H_
-#define _BLK_IOPRIO_H_
-
-#include
-
-struct request_queue;
-
-#ifdef CONFIG_BLK_CGROUP_IOPRIO
-int blk_ioprio_init(struct request_queue *q);
-#else
-static inline int blk_ioprio_init(struct request_queue *q)
-{
-	return 0;
-}
-#endif
-
-#endif /* _BLK_IOPRIO_H_ */
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 45da42e9e242..cbbd668029a1 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -825,10 +825,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 
 static const char *rq_qos_id_to_name(enum rq_qos_id id)
 {
-	switch (id) {
-	case RQ_QOS_IOPRIO:
-		return "ioprio";
-	}
 	return "unknown";
 }
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index db13581ae878..23cb7a3fa9b2 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -452,7 +452,7 @@ int rq_qos_register(struct rq_qos_ops *ops)
 		goto out;
 	}
 
-	start = RQ_QOS_IOPRIO + 1;
+	start = 1;
 	ret = ida_simple_get(&rq_qos_ida, start, INT_MAX, GFP_KERNEL);
 	if (ret < 0)
 		goto out;
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 6d691527cb51..bba829bbb461 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,7 @@ struct blk_mq_debugfs_attr;
 
 enum rq_qos_id {
-	RQ_QOS_IOPRIO,
+	RQ_QOS_UNUSED,
 };
 
 struct rq_wait {

From patchwork Thu Feb 17 03:13:49 2022
X-Patchwork-Submitter: Wang Jianchao
X-Patchwork-Id: 12749269
From: "Wang Jianchao (Kuaishou)"
To: Jens Axboe
Cc: Josef Bacik, Tejun Heo, Bart Van Assche,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC V4 6/6] blk: export the sysfs for switching qos
Date: Thu, 17 Feb 2022 11:13:49 +0800
Message-Id: <20220217031349.98561-7-jianchao.wan9@gmail.com>
In-Reply-To: <20220217031349.98561-1-jianchao.wan9@gmail.com>
List-ID: linux-block@vger.kernel.org

All of the blk-rq-qos policies have been changed to use the new
interfaces, so we can export the sysfs interface, namely
/sys/block/xxx/queue/qos, and get rid of the unused interfaces.

Signed-off-by: Wang Jianchao (Kuaishou)
---
 block/blk-mq-debugfs.c | 10 +------
 block/blk-rq-qos.h     | 63 +-----------------------------------------
 block/blk-sysfs.c      |  2 ++
 3 files changed, 4 insertions(+), 71 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index cbbd668029a1..3defd5cb1cea 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -823,11 +823,6 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q)
 	q->sched_debugfs_dir = NULL;
 }
 
-static const char *rq_qos_id_to_name(enum rq_qos_id id)
-{
-	return "unknown";
-}
-
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 {
 	debugfs_remove_recursive(rqos->debugfs_dir);
@@ -837,9 +832,6 @@ void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 {
 	struct request_queue *q = rqos->q;
-	const char *dir_name;
-
-	dir_name = rqos->ops->name ?
-			rqos->ops->name : rq_qos_id_to_name(rqos->id);
 
 	if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
 		return;
@@ -848,7 +840,7 @@ void blk_mq_debugfs_register_rqos(struct rq_qos *rqos)
 		q->rqos_debugfs_dir = debugfs_create_dir("rqos",
 							 q->debugfs_dir);
 
-	rqos->debugfs_dir = debugfs_create_dir(dir_name,
+	rqos->debugfs_dir = debugfs_create_dir(rqos->ops->name,
 					       rqos->q->rqos_debugfs_dir);
 
 	debugfs_create_files(rqos->debugfs_dir, rqos, rqos->ops->debugfs_attrs);
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index bba829bbb461..262d221794f5 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -13,10 +13,6 @@
 
 struct blk_mq_debugfs_attr;
 
-enum rq_qos_id {
-	RQ_QOS_UNUSED,
-};
-
 struct rq_wait {
 	wait_queue_head_t wait;
 	atomic_t inflight;
@@ -25,7 +21,7 @@ struct rq_wait {
 struct rq_qos {
 	const struct rq_qos_ops *ops;
 	struct request_queue *q;
-	enum rq_qos_id id;
+	int id;
 	refcount_t ref;
 	wait_queue_head_t waitq;
 	bool dying;
@@ -69,17 +65,6 @@ struct rq_depth {
 	unsigned int default_depth;
 };
 
-static inline struct rq_qos *rq_qos_id(struct request_queue *q,
-				       enum rq_qos_id id)
-{
-	struct rq_qos *rqos;
-	for (rqos = q->rq_qos; rqos; rqos = rqos->next) {
-		if (rqos->id == id)
-			break;
-	}
-	return rqos;
-}
-
 static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
 {
 	struct rq_qos *rqos;
@@ -99,52 +84,6 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
 	init_waitqueue_head(&rq_wait->wait);
 }
 
-static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
-{
-	/*
-	 * No IO can be in-flight when adding rqos, so freeze queue, which
-	 * is fine since we only support rq_qos for blk-mq queue.
-	 *
-	 * Reuse ->queue_lock for protecting against other concurrent
-	 * rq_qos adding/deleting
-	 */
-	blk_mq_freeze_queue(q);
-
-	spin_lock_irq(&q->queue_lock);
-	rqos->next = q->rq_qos;
-	q->rq_qos = rqos;
-	spin_unlock_irq(&q->queue_lock);
-
-	blk_mq_unfreeze_queue(q);
-
-	if (rqos->ops->debugfs_attrs)
-		blk_mq_debugfs_register_rqos(rqos);
-}
-
-static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
-{
-	struct rq_qos **cur;
-
-	/*
-	 * See comment in rq_qos_add() about freezing queue & using
-	 * ->queue_lock.
-	 */
-	blk_mq_freeze_queue(q);
-
-	spin_lock_irq(&q->queue_lock);
-	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
-		if (*cur == rqos) {
-			*cur = rqos->next;
-			break;
-		}
-	}
-	spin_unlock_irq(&q->queue_lock);
-
-	blk_mq_unfreeze_queue(q);
-
-	blk_mq_debugfs_unregister_rqos(rqos);
-}
-
 int rq_qos_register(struct rq_qos_ops *ops);
 void rq_qos_unregister(struct rq_qos_ops *ops);
 void rq_qos_activate(struct request_queue *q,
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 9f32882ceb2f..c02747db4e3b 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -574,6 +574,7 @@ QUEUE_RO_ENTRY(queue_max_segments, "max_segments");
 QUEUE_RO_ENTRY(queue_max_integrity_segments, "max_integrity_segments");
 QUEUE_RO_ENTRY(queue_max_segment_size, "max_segment_size");
 QUEUE_RW_ENTRY(elv_iosched, "scheduler");
+QUEUE_RW_ENTRY(queue_qos, "qos");
 
 QUEUE_RO_ENTRY(queue_logical_block_size, "logical_block_size");
 QUEUE_RO_ENTRY(queue_physical_block_size, "physical_block_size");
@@ -633,6 +634,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_max_integrity_segments_entry.attr,
 	&queue_max_segment_size_entry.attr,
 	&elv_iosched_entry.attr,
+	&queue_qos_entry.attr,
 	&queue_hw_sector_size_entry.attr,
 	&queue_logical_block_size_entry.attr,
 	&queue_physical_block_size_entry.attr,